00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 1998
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3264
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.072 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.073 The recommended git tool is: git
00:00:00.073 using credential 00000000-0000-0000-0000-000000000002
00:00:00.075 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.100 Fetching changes from the remote Git repository
00:00:00.103 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.133 Using shallow fetch with depth 1
00:00:00.133 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.133 > git --version # timeout=10
00:00:00.157 > git --version # 'git version 2.39.2'
00:00:00.157 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.176 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.176 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.587 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.598 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.610 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:04.610 > git config core.sparsecheckout # timeout=10
00:00:04.619 > git read-tree -mu HEAD # timeout=10
00:00:04.636 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:04.657 Commit message: "inventory: add WCP3 to free inventory"
00:00:04.657 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:04.780 [Pipeline] Start of Pipeline
00:00:04.794 [Pipeline] library
00:00:04.796 Loading library shm_lib@master
00:00:04.796 Library shm_lib@master is cached. Copying from home.
00:00:04.810 [Pipeline] node
00:00:04.819 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest
00:00:04.820 [Pipeline] {
00:00:04.828 [Pipeline] catchError
00:00:04.829 [Pipeline] {
00:00:04.838 [Pipeline] wrap
00:00:04.844 [Pipeline] {
00:00:04.850 [Pipeline] stage
00:00:04.852 [Pipeline] { (Prologue)
00:00:04.866 [Pipeline] echo
00:00:04.867 Node: VM-host-SM9
00:00:04.872 [Pipeline] cleanWs
00:00:04.883 [WS-CLEANUP] Deleting project workspace...
00:00:04.883 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.889 [WS-CLEANUP] done
00:00:05.058 [Pipeline] setCustomBuildProperty
00:00:05.146 [Pipeline] httpRequest
00:00:05.166 [Pipeline] echo
00:00:05.167 Sorcerer 10.211.164.101 is alive
00:00:05.174 [Pipeline] httpRequest
00:00:05.178 HttpMethod: GET
00:00:05.178 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:05.178 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:05.179 Response Code: HTTP/1.1 200 OK
00:00:05.180 Success: Status code 200 is in the accepted range: 200,404
00:00:05.180 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:07.026 [Pipeline] sh
00:00:07.307 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:07.321 [Pipeline] httpRequest
00:00:07.342 [Pipeline] echo
00:00:07.344 Sorcerer 10.211.164.101 is alive
00:00:07.352 [Pipeline] httpRequest
00:00:07.357 HttpMethod: GET
00:00:07.357 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:07.358 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:07.359 Response Code: HTTP/1.1 200 OK
00:00:07.359 Success: Status code 200 is in the accepted range: 200,404
00:00:07.360 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:27.111 [Pipeline] sh
00:00:27.392 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:29.943 [Pipeline] sh
00:00:30.250 + git -C spdk log --oneline -n5
00:00:30.250 719d03c6a sock/uring: only register net impl if supported
00:00:30.250 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:00:30.250 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:00:30.250 6c7c1f57e accel: add sequence outstanding stat
00:00:30.250 3bc8e6a26 accel: add utility to put task
00:00:30.272 [Pipeline] withCredentials
00:00:30.282 > git --version # timeout=10
00:00:30.297 > git --version # 'git version 2.39.2'
00:00:30.314 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:30.316 [Pipeline] {
00:00:30.326 [Pipeline] retry
00:00:30.329 [Pipeline] {
00:00:30.347 [Pipeline] sh
00:00:30.628 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:31.209 [Pipeline] }
00:00:31.233 [Pipeline] // retry
00:00:31.240 [Pipeline] }
00:00:31.261 [Pipeline] // withCredentials
00:00:31.274 [Pipeline] httpRequest
00:00:31.297 [Pipeline] echo
00:00:31.298 Sorcerer 10.211.164.101 is alive
00:00:31.304 [Pipeline] httpRequest
00:00:31.307 HttpMethod: GET
00:00:31.307 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:31.308 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:31.318 Response Code: HTTP/1.1 200 OK
00:00:31.319 Success: Status code 200 is in the accepted range: 200,404
00:00:31.319 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:54.800 [Pipeline] sh
00:00:55.079 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:56.468 [Pipeline] sh
00:00:56.745 + git -C dpdk log --oneline -n5
00:00:56.745 caf0f5d395 version: 22.11.4
00:00:56.745 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:00:56.745 dc9c799c7d vhost: fix missing spinlock unlock
00:00:56.745 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:00:56.745 6ef77f2a5e net/gve: fix RX buffer size alignment
00:00:56.761 [Pipeline] writeFile
00:00:56.775 [Pipeline] sh
00:00:57.056 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:57.068 [Pipeline] sh
00:00:57.349 + cat autorun-spdk.conf
00:00:57.349 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:57.349 SPDK_TEST_NVME=1
00:00:57.349 SPDK_TEST_FTL=1
00:00:57.349 SPDK_TEST_ISAL=1
00:00:57.349 SPDK_RUN_ASAN=1
00:00:57.349 SPDK_RUN_UBSAN=1
00:00:57.349 SPDK_TEST_XNVME=1
00:00:57.349 SPDK_TEST_NVME_FDP=1
00:00:57.349 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:57.349 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:00:57.349 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:57.356 RUN_NIGHTLY=1
00:00:57.358 [Pipeline] }
00:00:57.374 [Pipeline] // stage
00:00:57.387 [Pipeline] stage
00:00:57.388 [Pipeline] { (Run VM)
00:00:57.400 [Pipeline] sh
00:00:57.681 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:57.681 + echo 'Start stage prepare_nvme.sh'
00:00:57.681 Start stage prepare_nvme.sh
00:00:57.681 + [[ -n 1 ]]
00:00:57.681 + disk_prefix=ex1
00:00:57.681 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:00:57.681 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:00:57.681 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:00:57.681 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:57.681 ++ SPDK_TEST_NVME=1
00:00:57.681 ++ SPDK_TEST_FTL=1
00:00:57.681 ++ SPDK_TEST_ISAL=1
00:00:57.681 ++ SPDK_RUN_ASAN=1
00:00:57.681 ++ SPDK_RUN_UBSAN=1
00:00:57.681 ++ SPDK_TEST_XNVME=1
00:00:57.681 ++ SPDK_TEST_NVME_FDP=1
00:00:57.681 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:57.681 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:00:57.681 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:57.681 ++ RUN_NIGHTLY=1
00:00:57.681 + cd /var/jenkins/workspace/nvme-vg-autotest
00:00:57.681 + nvme_files=()
00:00:57.681 + declare -A nvme_files
00:00:57.681 + backend_dir=/var/lib/libvirt/images/backends
00:00:57.681 + nvme_files['nvme.img']=5G
00:00:57.681 + nvme_files['nvme-cmb.img']=5G
00:00:57.681 + nvme_files['nvme-multi0.img']=4G
00:00:57.681 + nvme_files['nvme-multi1.img']=4G
00:00:57.681 + nvme_files['nvme-multi2.img']=4G
00:00:57.681 + nvme_files['nvme-openstack.img']=8G
00:00:57.681 + nvme_files['nvme-zns.img']=5G
00:00:57.681 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:57.681 + (( SPDK_TEST_FTL == 1 ))
00:00:57.681 + nvme_files["nvme-ftl.img"]=6G
00:00:57.681 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:57.681 + nvme_files["nvme-fdp.img"]=1G
00:00:57.681 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:57.681 + for nvme in "${!nvme_files[@]}"
00:00:57.681 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:00:57.681 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:57.681 + for nvme in "${!nvme_files[@]}"
00:00:57.681 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G
00:00:57.681 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:00:57.681 + for nvme in "${!nvme_files[@]}"
00:00:57.681 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:00:57.941 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:57.941 + for nvme in "${!nvme_files[@]}"
00:00:57.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:00:57.941 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:57.941 + for nvme in "${!nvme_files[@]}"
00:00:57.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:00:57.941 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:57.941 + for nvme in "${!nvme_files[@]}"
00:00:57.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:00:57.941 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:57.941 + for nvme in "${!nvme_files[@]}"
00:00:57.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:00:57.941 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:57.941 + for nvme in "${!nvme_files[@]}"
00:00:57.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G
00:00:57.941 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:00:57.941 + for nvme in "${!nvme_files[@]}"
00:00:57.941 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:00:58.199 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:58.199 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:00:58.199 + echo 'End stage prepare_nvme.sh'
00:00:58.199 End stage prepare_nvme.sh
00:00:58.211 [Pipeline] sh
00:00:58.492 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:58.492 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38
00:00:58.752
00:00:58.752 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:00:58.752 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:00:58.752 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:00:58.752 HELP=0
00:00:58.752 DRY_RUN=0
00:00:58.752 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,
00:00:58.752 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:58.752 NVME_AUTO_CREATE=0
00:00:58.752 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,,
00:00:58.752 NVME_CMB=,,,,
00:00:58.752 NVME_PMR=,,,,
00:00:58.752 NVME_ZNS=,,,,
00:00:58.752 NVME_MS=true,,,,
00:00:58.752 NVME_FDP=,,,on,
00:00:58.752 SPDK_VAGRANT_DISTRO=fedora38
00:00:58.752 SPDK_VAGRANT_VMCPU=10
00:00:58.752 SPDK_VAGRANT_VMRAM=12288
00:00:58.752 SPDK_VAGRANT_PROVIDER=libvirt
00:00:58.752 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:58.752 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:58.752 SPDK_OPENSTACK_NETWORK=0
00:00:58.752 VAGRANT_PACKAGE_BOX=0
00:00:58.752 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:58.752 FORCE_DISTRO=true
00:00:58.752 VAGRANT_BOX_VERSION=
00:00:58.752 EXTRA_VAGRANTFILES=
00:00:58.752 NIC_MODEL=e1000
00:00:58.752
00:00:58.752 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt'
00:00:58.752 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:02.040 Bringing machine 'default' up with 'libvirt' provider...
00:01:02.299 ==> default: Creating image (snapshot of base box volume).
00:01:02.299 ==> default: Creating domain with the following settings...
00:01:02.299 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720849673_a123de183fff3775f5da
00:01:02.299 ==> default: -- Domain type: kvm
00:01:02.299 ==> default: -- Cpus: 10
00:01:02.299 ==> default: -- Feature: acpi
00:01:02.299 ==> default: -- Feature: apic
00:01:02.299 ==> default: -- Feature: pae
00:01:02.299 ==> default: -- Memory: 12288M
00:01:02.299 ==> default: -- Memory Backing: hugepages:
00:01:02.299 ==> default: -- Management MAC:
00:01:02.299 ==> default: -- Loader:
00:01:02.299 ==> default: -- Nvram:
00:01:02.299 ==> default: -- Base box: spdk/fedora38
00:01:02.299 ==> default: -- Storage pool: default
00:01:02.299 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720849673_a123de183fff3775f5da.img (20G)
00:01:02.299 ==> default: -- Volume Cache: default
00:01:02.299 ==> default: -- Kernel:
00:01:02.299 ==> default: -- Initrd:
00:01:02.299 ==> default: -- Graphics Type: vnc
00:01:02.299 ==> default: -- Graphics Port: -1
00:01:02.299 ==> default: -- Graphics IP: 127.0.0.1
00:01:02.299 ==> default: -- Graphics Password: Not defined
00:01:02.299 ==> default: -- Video Type: cirrus
00:01:02.299 ==> default: -- Video VRAM: 9216
00:01:02.299 ==> default: -- Sound Type:
00:01:02.299 ==> default: -- Keymap: en-us
00:01:02.299 ==> default: -- TPM Path:
00:01:02.299 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:02.299 ==> default: -- Command line args:
00:01:02.299 ==> default: -> value=-device,
00:01:02.299 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:02.299 ==> default: -> value=-drive,
00:01:02.299 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:02.299 ==> default: -> value=-device,
00:01:02.299 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:02.299 ==> default: -> value=-device,
00:01:02.299 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:02.299 ==> default: -> value=-drive,
00:01:02.299 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0,
00:01:02.299 ==> default: -> value=-device,
00:01:02.299 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:02.299 ==> default: -> value=-device,
00:01:02.299 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:02.299 ==> default: -> value=-drive,
00:01:02.299 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:02.299 ==> default: -> value=-device,
00:01:02.299 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:02.299 ==> default: -> value=-drive,
00:01:02.299 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:02.299 ==> default: -> value=-device,
00:01:02.299 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:02.299 ==> default: -> value=-drive,
00:01:02.299 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:02.299 ==> default: -> value=-device,
00:01:02.299 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:02.299 ==> default: -> value=-device,
00:01:02.299 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:02.299 ==> default: -> value=-device,
00:01:02.299 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:02.299 ==> default: -> value=-drive,
00:01:02.299 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:02.299 ==> default: -> value=-device,
00:01:02.299 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:02.558 ==> default: Creating shared folders metadata...
00:01:02.558 ==> default: Starting domain.
00:01:03.939 ==> default: Waiting for domain to get an IP address...
00:01:18.818 ==> default: Waiting for SSH to become available...
00:01:20.201 ==> default: Configuring and enabling network interfaces...
00:01:25.481 default: SSH address: 192.168.121.229:22
00:01:25.481 default: SSH username: vagrant
00:01:25.481 default: SSH auth method: private key
00:01:26.867 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:33.500 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:01:40.063 ==> default: Mounting SSHFS shared folder...
00:01:41.438 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:01:41.438 ==> default: Checking Mount..
00:01:42.809 ==> default: Folder Successfully Mounted!
00:01:42.809 ==> default: Running provisioner: file...
00:01:43.375 default: ~/.gitconfig => .gitconfig
00:01:43.944
00:01:43.944 SUCCESS!
00:01:43.944
00:01:43.944 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use.
00:01:43.944 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:43.944 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm.
00:01:43.944
00:01:43.951 [Pipeline] }
00:01:43.963 [Pipeline] // stage
00:01:43.969 [Pipeline] dir
00:01:43.969 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt
00:01:43.970 [Pipeline] {
00:01:43.980 [Pipeline] catchError
00:01:43.981 [Pipeline] {
00:01:43.993 [Pipeline] sh
00:01:44.273 + vagrant ssh-config --host vagrant
00:01:44.273 + sed -ne /^Host/,$p
00:01:44.273 + tee ssh_conf
00:01:47.560 Host vagrant
00:01:47.560 HostName 192.168.121.229
00:01:47.560 User vagrant
00:01:47.560 Port 22
00:01:47.560 UserKnownHostsFile /dev/null
00:01:47.560 StrictHostKeyChecking no
00:01:47.560 PasswordAuthentication no
00:01:47.560 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:01:47.560 IdentitiesOnly yes
00:01:47.560 LogLevel FATAL
00:01:47.560 ForwardAgent yes
00:01:47.560 ForwardX11 yes
00:01:47.560
00:01:47.576 [Pipeline] withEnv
00:01:47.580 [Pipeline] {
00:01:47.599 [Pipeline] sh
00:01:47.877 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:47.877 source /etc/os-release
00:01:47.877 [[ -e /image.version ]] && img=$(< /image.version)
00:01:47.877 # Minimal, systemd-like check.
00:01:47.877 if [[ -e /.dockerenv ]]; then
00:01:47.877 # Clear garbage from the node's name:
00:01:47.877 # agt-er_autotest_547-896 -> autotest_547-896
00:01:47.877 # $HOSTNAME is the actual container id
00:01:47.877 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:47.877 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:47.877 # We can assume this is a mount from a host where container is running,
00:01:47.877 # so fetch its hostname to easily identify the target swarm worker.
00:01:47.877 container="$(< /etc/hostname) ($agent)"
00:01:47.877 else
00:01:47.877 # Fallback
00:01:47.877 container=$agent
00:01:47.877 fi
00:01:47.877 fi
00:01:47.877 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:47.877
00:01:47.890 [Pipeline] }
00:01:47.911 [Pipeline] // withEnv
00:01:47.920 [Pipeline] setCustomBuildProperty
00:01:47.935 [Pipeline] stage
00:01:47.938 [Pipeline] { (Tests)
00:01:47.957 [Pipeline] sh
00:01:48.238 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:48.513 [Pipeline] sh
00:01:48.798 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:49.076 [Pipeline] timeout
00:01:49.076 Timeout set to expire in 40 min
00:01:49.079 [Pipeline] {
00:01:49.100 [Pipeline] sh
00:01:49.379 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:49.947 HEAD is now at 719d03c6a sock/uring: only register net impl if supported
00:01:49.960 [Pipeline] sh
00:01:50.238 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:50.509 [Pipeline] sh
00:01:50.792 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:51.067 [Pipeline] sh
00:01:51.348 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:01:51.348 ++ readlink -f spdk_repo
00:01:51.348 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:51.348 + [[ -n /home/vagrant/spdk_repo ]]
00:01:51.348 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:51.348 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:51.348 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:51.348 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:51.348 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:51.348 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:51.348 + cd /home/vagrant/spdk_repo
00:01:51.348 + source /etc/os-release
00:01:51.348 ++ NAME='Fedora Linux'
00:01:51.348 ++ VERSION='38 (Cloud Edition)'
00:01:51.348 ++ ID=fedora
00:01:51.348 ++ VERSION_ID=38
00:01:51.348 ++ VERSION_CODENAME=
00:01:51.348 ++ PLATFORM_ID=platform:f38
00:01:51.348 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:51.348 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:51.348 ++ LOGO=fedora-logo-icon
00:01:51.348 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:51.348 ++ HOME_URL=https://fedoraproject.org/
00:01:51.348 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:51.348 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:51.348 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:51.348 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:51.348 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:51.348 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:51.348 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:51.348 ++ SUPPORT_END=2024-05-14
00:01:51.348 ++ VARIANT='Cloud Edition'
00:01:51.348 ++ VARIANT_ID=cloud
00:01:51.348 + uname -a
00:01:51.606 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:51.606 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:51.863 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:52.122 Hugepages
00:01:52.122 node hugesize free / total
00:01:52.122 node0 1048576kB 0 / 0
00:01:52.122 node0 2048kB 0 / 0
00:01:52.122
00:01:52.122 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:52.122 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:52.122 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:52.122 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:52.122 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:01:52.381 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:01:52.381 + rm -f /tmp/spdk-ld-path
00:01:52.381 + source autorun-spdk.conf
00:01:52.381 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:52.381 ++ SPDK_TEST_NVME=1
00:01:52.381 ++ SPDK_TEST_FTL=1
00:01:52.381 ++ SPDK_TEST_ISAL=1
00:01:52.381 ++ SPDK_RUN_ASAN=1
00:01:52.381 ++ SPDK_RUN_UBSAN=1
00:01:52.381 ++ SPDK_TEST_XNVME=1
00:01:52.381 ++ SPDK_TEST_NVME_FDP=1
00:01:52.381 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:52.381 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:52.381 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:52.381 ++ RUN_NIGHTLY=1
00:01:52.381 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:52.381 + [[ -n '' ]]
00:01:52.381 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:52.381 + for M in /var/spdk/build-*-manifest.txt
00:01:52.381 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:52.381 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:52.381 + for M in /var/spdk/build-*-manifest.txt
00:01:52.381 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:52.381 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:52.381 ++ uname
00:01:52.381 + [[ Linux == \L\i\n\u\x ]]
00:01:52.381 + sudo dmesg -T
00:01:52.381 + sudo dmesg --clear
00:01:52.381 + dmesg_pid=5938
00:01:52.381 + sudo dmesg -Tw
00:01:52.381 + [[ Fedora Linux == FreeBSD ]]
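The `setup.sh status` table above ties the guest's view back to the QEMU command line from the domain definition: 0000:00:10.0 (addr=0x10) is the controller backed by ex1-nvme-ftl.img with 64-byte metadata, 0000:00:12.0 carries the three ex1-nvme-multi* namespaces as nvme2n1..nvme2n3, and 0000:00:13.0 is the FDP-enabled subsystem. A minimal sysfs walk that recovers the same controller-to-namespace mapping (an illustrative sketch, not one of the autotest scripts; it relies only on the standard Linux NVMe sysfs attributes):

    #!/usr/bin/env bash
    # List each NVMe controller's PCI address, model string and namespaces.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
        model=$(<"$ctrl/model")
        # Namespaces appear as nvmeXnY directories under the controller dir.
        ns=$(cd "$ctrl" && ls -d nvme*n* 2>/dev/null | tr '\n' ' ')
        echo "$bdf $model: ${ns:-no namespaces}"
    done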
00:01:52.381 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:52.381 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:52.381 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:52.381 + [[ -x /usr/src/fio-static/fio ]]
00:01:52.381 + export FIO_BIN=/usr/src/fio-static/fio
00:01:52.381 + FIO_BIN=/usr/src/fio-static/fio
00:01:52.381 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:52.381 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:52.381 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:52.381 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:52.381 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:52.381 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:52.381 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:52.381 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:52.381 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:52.381 Test configuration:
00:01:52.381 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:52.381 SPDK_TEST_NVME=1
00:01:52.381 SPDK_TEST_FTL=1
00:01:52.381 SPDK_TEST_ISAL=1
00:01:52.381 SPDK_RUN_ASAN=1
00:01:52.381 SPDK_RUN_UBSAN=1
00:01:52.381 SPDK_TEST_XNVME=1
00:01:52.381 SPDK_TEST_NVME_FDP=1
00:01:52.381 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:52.381 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:52.381 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:52.381 RUN_NIGHTLY=1
05:48:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
05:48:44 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
05:48:44 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
05:48:44 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
05:48:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
05:48:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
05:48:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
05:48:44 -- paths/export.sh@5 -- $ export PATH
05:48:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:52.381 05:48:44 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:52.381 05:48:44 -- common/autobuild_common.sh@444 -- $ date +%s
00:01:52.381 05:48:44 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720849724.XXXXXX
00:01:52.381 05:48:44 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720849724.4uOeni
00:01:52.381 05:48:44 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:01:52.381 05:48:44 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']'
00:01:52.381 05:48:44 -- common/autobuild_common.sh@451 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:01:52.381 05:48:44 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:01:52.381 05:48:44 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:52.381 05:48:44 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:52.381 05:48:44 -- common/autobuild_common.sh@460 -- $ get_config_params
00:01:52.381 05:48:44 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:01:52.381 05:48:44 -- common/autotest_common.sh@10 -- $ set +x
00:01:52.381 05:48:44 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:01:52.381 05:48:44 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:01:52.381 05:48:44 -- pm/common@17 -- $ local monitor
00:01:52.381 05:48:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:52.381 05:48:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:52.381 05:48:44 -- pm/common@21 -- $ date +%s
00:01:52.381 05:48:44 -- pm/common@25 -- $ sleep 1
00:01:52.640 05:48:44 -- pm/common@21 -- $ date +%s
00:01:52.640 05:48:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720849724
00:01:52.640 05:48:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720849724
00:01:52.640 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720849724_collect-vmstat.pm.log
00:01:52.640 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720849724_collect-cpu-load.pm.log
00:01:53.597 05:48:45 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:01:53.597 05:48:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:53.597 05:48:45 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:53.597 05:48:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:53.597 05:48:45 -- spdk/autobuild.sh@16 -- $ date -u
00:01:53.597 Sat Jul 13 05:48:45 AM UTC 2024
00:01:53.597 05:48:45 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:53.597 v24.09-pre-202-g719d03c6a
00:01:53.597 05:48:45 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:53.597 05:48:45 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:53.597 05:48:45 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:53.597 05:48:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:53.597 05:48:45 -- common/autotest_common.sh@10 -- $ set +x
00:01:53.597 ************************************
00:01:53.597 START TEST asan
00:01:53.597 ************************************
00:01:53.597 using asan
00:01:53.597 05:48:45 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan'
00:01:53.597
00:01:53.597 real 0m0.000s
00:01:53.597 user 0m0.000s
00:01:53.597 sys 0m0.000s
00:01:53.597 05:48:45 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:53.597 ************************************
00:01:53.597 05:48:45 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:53.597 END TEST asan
00:01:53.597 ************************************
00:01:53.597 05:48:45 -- common/autotest_common.sh@1142 -- $ return 0
00:01:53.597 05:48:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:53.597 05:48:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:53.597 05:48:45 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:53.597 05:48:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:53.597 05:48:45 -- common/autotest_common.sh@10 -- $ set +x
00:01:53.597 ************************************
00:01:53.597 START TEST ubsan
00:01:53.597 ************************************
00:01:53.597 using ubsan
00:01:53.597 05:48:45 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:53.597
00:01:53.597 real 0m0.000s
00:01:53.597 user 0m0.000s
00:01:53.597 sys 0m0.000s
00:01:53.597 05:48:45 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:53.597 05:48:45 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:53.597 ************************************
00:01:53.597 END TEST ubsan
00:01:53.597 ************************************
00:01:53.598 05:48:45 -- common/autotest_common.sh@1142 -- $ return 0
00:01:53.598 05:48:45 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:01:53.598 05:48:45 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:53.598 05:48:45 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:53.598 05:48:45 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']'
00:01:53.598 05:48:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:53.598 05:48:45 -- common/autotest_common.sh@10 -- $ set +x
00:01:53.598 ************************************
00:01:53.598 START TEST build_native_dpdk
00:01:53.598 ************************************
00:01:53.598 05:48:45 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:01:53.598 caf0f5d395 version: 22.11.4
00:01:53.598 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:53.598 dc9c799c7d vhost: fix missing spinlock unlock
00:01:53.598 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:53.598 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:01:53.598 05:48:45 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:53.598 patching file config/rte_config.h
00:01:53.598 Hunk #1 succeeded at 60 (offset 1 line).
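The trace above is scripts/common.sh answering `lt 22.11.4 21.11.0`: each version string is split on `.`, `-` and `:` (the IFS=.-: lines), the components are normalized through `decimal` and compared numerically left to right, and the whole test returns 1 because 22 > 21, which sends autobuild on to the rte_config.h patch step that follows. A standalone sketch of the same component-wise comparison (illustrative; the function name and shape are not the SPDK originals):

    # version_lt A B: succeed when dotted version A sorts strictly before B.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            # Missing fields count as 0; the first unequal field decides.
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 22.11.4 21.11.0 || echo "not older"   # prints "not older"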
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:01:53.598 05:48:45 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:58.891 The Meson build system
00:01:58.891 Version: 1.3.1
00:01:58.891 Source dir: /home/vagrant/spdk_repo/dpdk
00:01:58.891 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:01:58.891 Build type: native build
00:01:58.891 Program cat found: YES (/usr/bin/cat)
00:01:58.891 Project name: DPDK
00:01:58.891 Project version: 22.11.4
00:01:58.891 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:58.891 C linker for the host machine: gcc ld.bfd 2.39-16
00:01:58.891 Host machine cpu family: x86_64
00:01:58.891 Host machine cpu: x86_64
00:01:58.891 Message: ## Building in Developer Mode ##
00:01:58.891 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:58.891 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:01:58.892 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:01:58.892 Program objdump found: YES (/usr/bin/objdump)
00:01:58.892 Program python3 found: YES (/usr/bin/python3)
00:01:58.892 Program cat found: YES (/usr/bin/cat)
00:01:58.892 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
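Meson flags the `-Dmachine=native` argument right away: config/meson.build deprecates "machine" in favor of "cpu_instruction_set". Going only by that warning, the equivalent invocation would swap that one flag and keep the rest of the options the log shows (a sketch, not what autobuild currently runs):

    meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,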
00:01:58.892 Checking for size of "void *" : 8 00:01:58.892 Checking for size of "void *" : 8 (cached) 00:01:58.892 Library m found: YES 00:01:58.892 Library numa found: YES 00:01:58.892 Has header "numaif.h" : YES 00:01:58.892 Library fdt found: NO 00:01:58.892 Library execinfo found: NO 00:01:58.892 Has header "execinfo.h" : YES 00:01:58.892 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:58.892 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:58.892 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:58.892 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:58.892 Run-time dependency openssl found: YES 3.0.9 00:01:58.892 Run-time dependency libpcap found: YES 1.10.4 00:01:58.892 Has header "pcap.h" with dependency libpcap: YES 00:01:58.892 Compiler for C supports arguments -Wcast-qual: YES 00:01:58.892 Compiler for C supports arguments -Wdeprecated: YES 00:01:58.892 Compiler for C supports arguments -Wformat: YES 00:01:58.892 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:58.892 Compiler for C supports arguments -Wformat-security: NO 00:01:58.892 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:58.892 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:58.892 Compiler for C supports arguments -Wnested-externs: YES 00:01:58.892 Compiler for C supports arguments -Wold-style-definition: YES 00:01:58.892 Compiler for C supports arguments -Wpointer-arith: YES 00:01:58.892 Compiler for C supports arguments -Wsign-compare: YES 00:01:58.892 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:58.892 Compiler for C supports arguments -Wundef: YES 00:01:58.892 Compiler for C supports arguments -Wwrite-strings: YES 00:01:58.892 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:58.892 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:58.892 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:58.892 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:58.892 Compiler for C supports arguments -mavx512f: YES 00:01:58.892 Checking if "AVX512 checking" compiles: YES 00:01:58.892 Fetching value of define "__SSE4_2__" : 1 00:01:58.892 Fetching value of define "__AES__" : 1 00:01:58.892 Fetching value of define "__AVX__" : 1 00:01:58.892 Fetching value of define "__AVX2__" : 1 00:01:58.892 Fetching value of define "__AVX512BW__" : (undefined) 00:01:58.892 Fetching value of define "__AVX512CD__" : (undefined) 00:01:58.892 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:58.892 Fetching value of define "__AVX512F__" : (undefined) 00:01:58.892 Fetching value of define "__AVX512VL__" : (undefined) 00:01:58.892 Fetching value of define "__PCLMUL__" : 1 00:01:58.892 Fetching value of define "__RDRND__" : 1 00:01:58.892 Fetching value of define "__RDSEED__" : 1 00:01:58.892 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:58.892 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:58.892 Message: lib/kvargs: Defining dependency "kvargs" 00:01:58.892 Message: lib/telemetry: Defining dependency "telemetry" 00:01:58.892 Checking for function "getentropy" : YES 00:01:58.892 Message: lib/eal: Defining dependency "eal" 00:01:58.892 Message: lib/ring: Defining dependency "ring" 00:01:58.892 Message: lib/rcu: Defining dependency "rcu" 00:01:58.892 Message: lib/mempool: Defining dependency "mempool" 00:01:58.892 Message: lib/mbuf: Defining dependency "mbuf" 00:01:58.892 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:01:58.892 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:58.892 Compiler for C supports arguments -mpclmul: YES 00:01:58.892 Compiler for C supports arguments -maes: YES 00:01:58.892 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:58.892 Compiler for C supports arguments -mavx512bw: YES 00:01:58.892 Compiler for C supports arguments -mavx512dq: YES 00:01:58.892 Compiler for C supports arguments -mavx512vl: YES 00:01:58.892 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:58.892 Compiler for C supports arguments -mavx2: YES 00:01:58.892 Compiler for C supports arguments -mavx: YES 00:01:58.892 Message: lib/net: Defining dependency "net" 00:01:58.892 Message: lib/meter: Defining dependency "meter" 00:01:58.892 Message: lib/ethdev: Defining dependency "ethdev" 00:01:58.892 Message: lib/pci: Defining dependency "pci" 00:01:58.892 Message: lib/cmdline: Defining dependency "cmdline" 00:01:58.892 Message: lib/metrics: Defining dependency "metrics" 00:01:58.892 Message: lib/hash: Defining dependency "hash" 00:01:58.892 Message: lib/timer: Defining dependency "timer" 00:01:58.892 Fetching value of define "__AVX2__" : 1 (cached) 00:01:58.892 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:58.892 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:58.892 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:58.892 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:58.892 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:58.892 Message: lib/acl: Defining dependency "acl" 00:01:58.892 Message: lib/bbdev: Defining dependency "bbdev" 00:01:58.892 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:58.892 Run-time dependency libelf found: YES 0.190 00:01:58.892 Message: lib/bpf: Defining dependency "bpf" 00:01:58.892 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:58.892 Message: lib/compressdev: Defining dependency "compressdev" 00:01:58.892 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:58.892 Message: lib/distributor: Defining dependency "distributor" 00:01:58.892 Message: lib/efd: Defining dependency "efd" 00:01:58.892 Message: lib/eventdev: Defining dependency "eventdev" 00:01:58.892 Message: lib/gpudev: Defining dependency "gpudev" 00:01:58.892 Message: lib/gro: Defining dependency "gro" 00:01:58.892 Message: lib/gso: Defining dependency "gso" 00:01:58.892 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:58.892 Message: lib/jobstats: Defining dependency "jobstats" 00:01:58.892 Message: lib/latencystats: Defining dependency "latencystats" 00:01:58.892 Message: lib/lpm: Defining dependency "lpm" 00:01:58.892 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:58.892 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:58.892 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:58.892 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:58.892 Message: lib/member: Defining dependency "member" 00:01:58.892 Message: lib/pcapng: Defining dependency "pcapng" 00:01:58.892 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:58.892 Message: lib/power: Defining dependency "power" 00:01:58.892 Message: lib/rawdev: Defining dependency "rawdev" 00:01:58.892 Message: lib/regexdev: Defining dependency "regexdev" 00:01:58.892 Message: lib/dmadev: Defining dependency "dmadev" 00:01:58.892 Message: lib/rib: Defining 
dependency "rib" 00:01:58.892 Message: lib/reorder: Defining dependency "reorder" 00:01:58.892 Message: lib/sched: Defining dependency "sched" 00:01:58.892 Message: lib/security: Defining dependency "security" 00:01:58.892 Message: lib/stack: Defining dependency "stack" 00:01:58.892 Has header "linux/userfaultfd.h" : YES 00:01:58.892 Message: lib/vhost: Defining dependency "vhost" 00:01:58.892 Message: lib/ipsec: Defining dependency "ipsec" 00:01:58.892 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:58.892 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:58.892 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:58.892 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:58.892 Message: lib/fib: Defining dependency "fib" 00:01:58.892 Message: lib/port: Defining dependency "port" 00:01:58.892 Message: lib/pdump: Defining dependency "pdump" 00:01:58.892 Message: lib/table: Defining dependency "table" 00:01:58.892 Message: lib/pipeline: Defining dependency "pipeline" 00:01:58.892 Message: lib/graph: Defining dependency "graph" 00:01:58.892 Message: lib/node: Defining dependency "node" 00:01:58.892 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:58.892 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:58.892 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:58.892 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:58.892 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:58.892 Compiler for C supports arguments -Wno-unused-value: YES 00:01:58.892 Compiler for C supports arguments -Wno-format: YES 00:01:58.892 Compiler for C supports arguments -Wno-format-security: YES 00:01:58.892 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:00.269 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:00.269 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:00.269 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:00.269 Fetching value of define "__AVX2__" : 1 (cached) 00:02:00.269 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:00.269 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:00.269 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:00.269 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:00.269 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:00.269 Program doxygen found: YES (/usr/bin/doxygen) 00:02:00.269 Configuring doxy-api.conf using configuration 00:02:00.269 Program sphinx-build found: NO 00:02:00.269 Configuring rte_build_config.h using configuration 00:02:00.269 Message: 00:02:00.269 ================= 00:02:00.269 Applications Enabled 00:02:00.269 ================= 00:02:00.269 00:02:00.269 apps: 00:02:00.269 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:00.269 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:00.269 test-security-perf, 00:02:00.269 00:02:00.269 Message: 00:02:00.269 ================= 00:02:00.269 Libraries Enabled 00:02:00.269 ================= 00:02:00.269 00:02:00.269 libs: 00:02:00.269 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:00.269 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:00.269 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:00.269 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:00.269 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:00.269 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:00.269 table, pipeline, graph, node, 00:02:00.269 00:02:00.269 Message: 00:02:00.269 =============== 00:02:00.269 Drivers Enabled 00:02:00.269 =============== 00:02:00.269 00:02:00.269 common: 00:02:00.269 00:02:00.269 bus: 00:02:00.269 pci, vdev, 00:02:00.269 mempool: 00:02:00.269 ring, 00:02:00.269 dma: 00:02:00.269 00:02:00.269 net: 00:02:00.269 i40e, 00:02:00.269 raw: 00:02:00.269 00:02:00.269 crypto: 00:02:00.269 00:02:00.269 compress: 00:02:00.269 00:02:00.269 regex: 00:02:00.269 00:02:00.269 vdpa: 00:02:00.269 00:02:00.269 event: 00:02:00.269 00:02:00.269 baseband: 00:02:00.269 00:02:00.269 gpu: 00:02:00.269 00:02:00.269 00:02:00.269 Message: 00:02:00.269 ================= 00:02:00.269 Content Skipped 00:02:00.269 ================= 00:02:00.269 00:02:00.269 apps: 00:02:00.269 00:02:00.269 libs: 00:02:00.269 kni: explicitly disabled via build config (deprecated lib) 00:02:00.269 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:00.269 00:02:00.269 drivers: 00:02:00.269 common/cpt: not in enabled drivers build config 00:02:00.269 common/dpaax: not in enabled drivers build config 00:02:00.269 common/iavf: not in enabled drivers build config 00:02:00.269 common/idpf: not in enabled drivers build config 00:02:00.269 common/mvep: not in enabled drivers build config 00:02:00.269 common/octeontx: not in enabled drivers build config 00:02:00.269 bus/auxiliary: not in enabled drivers build config 00:02:00.269 bus/dpaa: not in enabled drivers build config 00:02:00.269 bus/fslmc: not in enabled drivers build config 00:02:00.269 bus/ifpga: not in enabled drivers build config 00:02:00.269 bus/vmbus: not in enabled drivers build config 00:02:00.269 common/cnxk: not in enabled drivers build config 00:02:00.269 common/mlx5: not in enabled drivers build config 00:02:00.269 common/qat: not in enabled drivers build config 00:02:00.269 common/sfc_efx: not in enabled drivers build config 00:02:00.269 mempool/bucket: not in enabled drivers build config 00:02:00.269 mempool/cnxk: not in enabled drivers build config 00:02:00.269 mempool/dpaa: not in enabled drivers build config 00:02:00.269 mempool/dpaa2: not in enabled drivers build config 00:02:00.269 mempool/octeontx: not in enabled drivers build config 00:02:00.269 mempool/stack: not in enabled drivers build config 00:02:00.269 dma/cnxk: not in enabled drivers build config 00:02:00.269 dma/dpaa: not in enabled drivers build config 00:02:00.269 dma/dpaa2: not in enabled drivers build config 00:02:00.269 dma/hisilicon: not in enabled drivers build config 00:02:00.269 dma/idxd: not in enabled drivers build config 00:02:00.269 dma/ioat: not in enabled drivers build config 00:02:00.269 dma/skeleton: not in enabled drivers build config 00:02:00.269 net/af_packet: not in enabled drivers build config 00:02:00.269 net/af_xdp: not in enabled drivers build config 00:02:00.269 net/ark: not in enabled drivers build config 00:02:00.269 net/atlantic: not in enabled drivers build config 00:02:00.269 net/avp: not in enabled drivers build config 00:02:00.269 net/axgbe: not in enabled drivers build config 00:02:00.269 net/bnx2x: not in enabled drivers build config 00:02:00.269 net/bnxt: not in enabled drivers build config 00:02:00.269 net/bonding: not in enabled drivers build config 00:02:00.269 net/cnxk: not in enabled drivers build config 00:02:00.269 net/cxgbe: not in 
enabled drivers build config 00:02:00.269 net/dpaa: not in enabled drivers build config 00:02:00.269 net/dpaa2: not in enabled drivers build config 00:02:00.269 net/e1000: not in enabled drivers build config 00:02:00.269 net/ena: not in enabled drivers build config 00:02:00.269 net/enetc: not in enabled drivers build config 00:02:00.269 net/enetfec: not in enabled drivers build config 00:02:00.269 net/enic: not in enabled drivers build config 00:02:00.269 net/failsafe: not in enabled drivers build config 00:02:00.269 net/fm10k: not in enabled drivers build config 00:02:00.269 net/gve: not in enabled drivers build config 00:02:00.269 net/hinic: not in enabled drivers build config 00:02:00.269 net/hns3: not in enabled drivers build config 00:02:00.269 net/iavf: not in enabled drivers build config 00:02:00.269 net/ice: not in enabled drivers build config 00:02:00.269 net/idpf: not in enabled drivers build config 00:02:00.269 net/igc: not in enabled drivers build config 00:02:00.269 net/ionic: not in enabled drivers build config 00:02:00.269 net/ipn3ke: not in enabled drivers build config 00:02:00.269 net/ixgbe: not in enabled drivers build config 00:02:00.269 net/kni: not in enabled drivers build config 00:02:00.269 net/liquidio: not in enabled drivers build config 00:02:00.269 net/mana: not in enabled drivers build config 00:02:00.269 net/memif: not in enabled drivers build config 00:02:00.269 net/mlx4: not in enabled drivers build config 00:02:00.269 net/mlx5: not in enabled drivers build config 00:02:00.269 net/mvneta: not in enabled drivers build config 00:02:00.269 net/mvpp2: not in enabled drivers build config 00:02:00.269 net/netvsc: not in enabled drivers build config 00:02:00.269 net/nfb: not in enabled drivers build config 00:02:00.269 net/nfp: not in enabled drivers build config 00:02:00.269 net/ngbe: not in enabled drivers build config 00:02:00.269 net/null: not in enabled drivers build config 00:02:00.269 net/octeontx: not in enabled drivers build config 00:02:00.269 net/octeon_ep: not in enabled drivers build config 00:02:00.270 net/pcap: not in enabled drivers build config 00:02:00.270 net/pfe: not in enabled drivers build config 00:02:00.270 net/qede: not in enabled drivers build config 00:02:00.270 net/ring: not in enabled drivers build config 00:02:00.270 net/sfc: not in enabled drivers build config 00:02:00.270 net/softnic: not in enabled drivers build config 00:02:00.270 net/tap: not in enabled drivers build config 00:02:00.270 net/thunderx: not in enabled drivers build config 00:02:00.270 net/txgbe: not in enabled drivers build config 00:02:00.270 net/vdev_netvsc: not in enabled drivers build config 00:02:00.270 net/vhost: not in enabled drivers build config 00:02:00.270 net/virtio: not in enabled drivers build config 00:02:00.270 net/vmxnet3: not in enabled drivers build config 00:02:00.270 raw/cnxk_bphy: not in enabled drivers build config 00:02:00.270 raw/cnxk_gpio: not in enabled drivers build config 00:02:00.270 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:00.270 raw/ifpga: not in enabled drivers build config 00:02:00.270 raw/ntb: not in enabled drivers build config 00:02:00.270 raw/skeleton: not in enabled drivers build config 00:02:00.270 crypto/armv8: not in enabled drivers build config 00:02:00.270 crypto/bcmfs: not in enabled drivers build config 00:02:00.270 crypto/caam_jr: not in enabled drivers build config 00:02:00.270 crypto/ccp: not in enabled drivers build config 00:02:00.270 crypto/cnxk: not in enabled drivers build config 00:02:00.270 
crypto/dpaa_sec: not in enabled drivers build config 00:02:00.270 crypto/dpaa2_sec: not in enabled drivers build config 00:02:00.270 crypto/ipsec_mb: not in enabled drivers build config 00:02:00.270 crypto/mlx5: not in enabled drivers build config 00:02:00.270 crypto/mvsam: not in enabled drivers build config 00:02:00.270 crypto/nitrox: not in enabled drivers build config 00:02:00.270 crypto/null: not in enabled drivers build config 00:02:00.270 crypto/octeontx: not in enabled drivers build config 00:02:00.270 crypto/openssl: not in enabled drivers build config 00:02:00.270 crypto/scheduler: not in enabled drivers build config 00:02:00.270 crypto/uadk: not in enabled drivers build config 00:02:00.270 crypto/virtio: not in enabled drivers build config 00:02:00.270 compress/isal: not in enabled drivers build config 00:02:00.270 compress/mlx5: not in enabled drivers build config 00:02:00.270 compress/octeontx: not in enabled drivers build config 00:02:00.270 compress/zlib: not in enabled drivers build config 00:02:00.270 regex/mlx5: not in enabled drivers build config 00:02:00.270 regex/cn9k: not in enabled drivers build config 00:02:00.270 vdpa/ifc: not in enabled drivers build config 00:02:00.270 vdpa/mlx5: not in enabled drivers build config 00:02:00.270 vdpa/sfc: not in enabled drivers build config 00:02:00.270 event/cnxk: not in enabled drivers build config 00:02:00.270 event/dlb2: not in enabled drivers build config 00:02:00.270 event/dpaa: not in enabled drivers build config 00:02:00.270 event/dpaa2: not in enabled drivers build config 00:02:00.270 event/dsw: not in enabled drivers build config 00:02:00.270 event/opdl: not in enabled drivers build config 00:02:00.270 event/skeleton: not in enabled drivers build config 00:02:00.270 event/sw: not in enabled drivers build config 00:02:00.270 event/octeontx: not in enabled drivers build config 00:02:00.270 baseband/acc: not in enabled drivers build config 00:02:00.270 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:00.270 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:00.270 baseband/la12xx: not in enabled drivers build config 00:02:00.270 baseband/null: not in enabled drivers build config 00:02:00.270 baseband/turbo_sw: not in enabled drivers build config 00:02:00.270 gpu/cuda: not in enabled drivers build config 00:02:00.270 00:02:00.270 00:02:00.270 Build targets in project: 314 00:02:00.270 00:02:00.270 DPDK 22.11.4 00:02:00.270 00:02:00.270 User defined options 00:02:00.270 libdir : lib 00:02:00.270 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:00.270 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:00.270 c_link_args : 00:02:00.270 enable_docs : false 00:02:00.270 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:00.270 enable_kmods : false 00:02:00.270 machine : native 00:02:00.270 tests : false 00:02:00.270 00:02:00.270 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:00.270 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
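The deprecation warning above is emitted because this build configured DPDK with the bare `meson [options]` invocation; the following is a minimal sketch of the equivalent, non-deprecated `meson setup` form, reconstructed solely from the user-defined options recorded in this log (the paths, `-D` option names and values, and the `-j10` ninja step are copied from the output above and are an assumption insofar as they have not been re-verified against this build environment):

    # Reconstructed configure step, assuming it is run from the DPDK source
    # tree (/home/vagrant/spdk_repo/dpdk, inferred from the logged paths).
    # "meson setup" is the non-deprecated spelling of the bare "meson" call
    # the warning above complains about.
    meson setup /home/vagrant/spdk_repo/dpdk/build-tmp \
        --libdir lib \
        --prefix /home/vagrant/spdk_repo/dpdk/build \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false

    # Build step as logged below (743 targets, 10 parallel jobs).
    ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10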
00:02:00.270 05:48:51 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:00.270 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:00.270 [1/743] Generating lib/rte_telemetry_def with a custom command 00:02:00.270 [2/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:00.270 [3/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:00.270 [4/743] Generating lib/rte_kvargs_def with a custom command 00:02:00.270 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:00.270 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.270 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.270 [8/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:00.270 [9/743] Linking static target lib/librte_kvargs.a 00:02:00.270 [10/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:00.270 [11/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.270 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:00.270 [13/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:00.530 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:00.530 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:00.530 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.530 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:00.530 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:00.530 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:00.530 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.530 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:00.530 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:00.530 [23/743] Linking target lib/librte_kvargs.so.23.0 00:02:00.530 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:00.789 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:00.789 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:00.789 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:00.789 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:00.789 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:00.789 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:00.789 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:00.789 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:00.789 [33/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:00.789 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:01.048 [35/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:01.048 [36/743] Linking static target lib/librte_telemetry.a 00:02:01.048 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:01.048 [38/743] Generating symbol file 
lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:01.048 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:01.048 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:01.048 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:01.048 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:01.305 [43/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:01.305 [44/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:01.305 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:01.305 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:01.305 [47/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.305 [48/743] Linking target lib/librte_telemetry.so.23.0 00:02:01.305 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:01.305 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:01.305 [51/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:01.305 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:01.563 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:01.563 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:01.563 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:01.563 [56/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:01.563 [57/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:01.563 [58/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:01.563 [59/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:01.563 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:01.563 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:01.563 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:01.563 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:01.563 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:01.563 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:01.563 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:01.563 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:01.563 [68/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:01.563 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:01.821 [70/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:01.821 [71/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:01.821 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:01.821 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:01.821 [74/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:01.821 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:01.821 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:01.821 [77/743] Generating lib/rte_eal_def with a custom command 00:02:01.821 [78/743] Generating lib/rte_eal_mingw with a custom 
command 00:02:01.821 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:01.821 [80/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:01.821 [81/743] Generating lib/rte_ring_def with a custom command 00:02:01.821 [82/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:01.821 [83/743] Generating lib/rte_ring_mingw with a custom command 00:02:01.821 [84/743] Generating lib/rte_rcu_def with a custom command 00:02:01.821 [85/743] Generating lib/rte_rcu_mingw with a custom command 00:02:01.821 [86/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:02.080 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:02.080 [88/743] Linking static target lib/librte_ring.a 00:02:02.080 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:02.080 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:02.080 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:02.080 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:02.080 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:02.338 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.338 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:02.338 [96/743] Linking static target lib/librte_eal.a 00:02:02.597 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:02.597 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:02.597 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:02.597 [100/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:02.597 [101/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:02.597 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:02.597 [103/743] Linking static target lib/librte_rcu.a 00:02:02.855 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:02.855 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:02.855 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:02.855 [107/743] Linking static target lib/librte_mempool.a 00:02:03.114 [108/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:03.114 [109/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.114 [110/743] Generating lib/rte_net_def with a custom command 00:02:03.114 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:03.114 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:03.114 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:03.114 [114/743] Generating lib/rte_meter_def with a custom command 00:02:03.114 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:03.372 [116/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:03.372 [117/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:03.373 [118/743] Linking static target lib/librte_meter.a 00:02:03.373 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:03.373 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:03.373 [121/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.631 [122/743] Compiling C 
object lib/librte_net.a.p/net_rte_net.c.o 00:02:03.631 [123/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:03.631 [124/743] Linking static target lib/librte_net.a 00:02:03.631 [125/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:03.631 [126/743] Linking static target lib/librte_mbuf.a 00:02:03.631 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.889 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.889 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:03.889 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:04.147 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:04.147 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:04.147 [133/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:04.147 [134/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.407 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:04.665 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:04.665 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:04.665 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:04.924 [139/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:04.924 [140/743] Generating lib/rte_pci_def with a custom command 00:02:04.924 [141/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:04.924 [142/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:04.924 [143/743] Linking static target lib/librte_pci.a 00:02:04.924 [144/743] Generating lib/rte_pci_mingw with a custom command 00:02:04.924 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:04.924 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:04.924 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:04.924 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:04.924 [149/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.182 [150/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:05.182 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:05.182 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:05.182 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:05.182 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:05.182 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:05.182 [156/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:05.182 [157/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:05.182 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:05.182 [159/743] Generating lib/rte_cmdline_def with a custom command 00:02:05.182 [160/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:05.182 [161/743] Generating lib/rte_metrics_def with a custom command 00:02:05.182 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:05.441 [163/743] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:05.441 [164/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:05.441 [165/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:05.441 [166/743] Generating lib/rte_hash_def with a custom command 00:02:05.441 [167/743] Generating lib/rte_hash_mingw with a custom command 00:02:05.441 [168/743] Generating lib/rte_timer_def with a custom command 00:02:05.441 [169/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:05.441 [170/743] Generating lib/rte_timer_mingw with a custom command 00:02:05.441 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:05.699 [172/743] Linking static target lib/librte_cmdline.a 00:02:05.699 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:05.958 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:05.958 [175/743] Linking static target lib/librte_metrics.a 00:02:05.958 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:05.958 [177/743] Linking static target lib/librte_timer.a 00:02:06.217 [178/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.217 [179/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.475 [180/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:06.475 [181/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:06.475 [182/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.475 [183/743] Linking static target lib/librte_ethdev.a 00:02:06.475 [184/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:07.040 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:07.040 [186/743] Generating lib/rte_acl_def with a custom command 00:02:07.040 [187/743] Generating lib/rte_acl_mingw with a custom command 00:02:07.040 [188/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:07.040 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:07.040 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:07.040 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:07.297 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:07.297 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:07.297 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:07.863 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:07.863 [196/743] Linking static target lib/librte_bitratestats.a 00:02:07.863 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:07.863 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.120 [199/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:08.120 [200/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:08.120 [201/743] Linking static target lib/librte_bbdev.a 00:02:08.378 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:08.378 [203/743] Linking static target lib/librte_hash.a 00:02:08.378 [204/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:08.378 [205/743] Linking static target lib/acl/libavx512_tmp.a 00:02:08.636 [206/743] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:08.636 [207/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:08.636 [208/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.636 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:08.894 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.894 [211/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:08.894 [212/743] Generating lib/rte_bpf_def with a custom command 00:02:09.153 [213/743] Generating lib/rte_bpf_mingw with a custom command 00:02:09.153 [214/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:09.153 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:02:09.153 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:09.153 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:09.412 [218/743] Linking static target lib/librte_acl.a 00:02:09.412 [219/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:09.412 [220/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:09.412 [221/743] Linking static target lib/librte_cfgfile.a 00:02:09.412 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:09.672 [223/743] Generating lib/rte_compressdev_def with a custom command 00:02:09.672 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:09.672 [225/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.672 [226/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.672 [227/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.672 [228/743] Linking target lib/librte_eal.so.23.0 00:02:09.672 [229/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:09.672 [230/743] Generating lib/rte_cryptodev_def with a custom command 00:02:09.672 [231/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:09.672 [232/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:09.932 [233/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:09.932 [234/743] Linking target lib/librte_ring.so.23.0 00:02:09.932 [235/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:09.932 [236/743] Linking target lib/librte_meter.so.23.0 00:02:09.932 [237/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:09.932 [238/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:09.932 [239/743] Linking target lib/librte_pci.so.23.0 00:02:09.932 [240/743] Linking target lib/librte_rcu.so.23.0 00:02:09.932 [241/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:10.191 [242/743] Linking target lib/librte_mempool.so.23.0 00:02:10.191 [243/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:10.191 [244/743] Linking target lib/librte_timer.so.23.0 00:02:10.191 [245/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:10.191 [246/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:10.191 [247/743] Linking static target lib/librte_bpf.a 00:02:10.191 [248/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 
00:02:10.191 [249/743] Linking target lib/librte_acl.so.23.0 00:02:10.191 [250/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:10.191 [251/743] Linking target lib/librte_mbuf.so.23.0 00:02:10.191 [252/743] Linking target lib/librte_cfgfile.so.23.0 00:02:10.191 [253/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:10.191 [254/743] Linking static target lib/librte_compressdev.a 00:02:10.450 [255/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:10.450 [256/743] Generating lib/rte_distributor_def with a custom command 00:02:10.450 [257/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:10.450 [258/743] Generating lib/rte_distributor_mingw with a custom command 00:02:10.450 [259/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:10.450 [260/743] Linking target lib/librte_net.so.23.0 00:02:10.450 [261/743] Linking target lib/librte_bbdev.so.23.0 00:02:10.450 [262/743] Generating lib/rte_efd_def with a custom command 00:02:10.450 [263/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:10.450 [264/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.450 [265/743] Generating lib/rte_efd_mingw with a custom command 00:02:10.450 [266/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:10.709 [267/743] Linking target lib/librte_cmdline.so.23.0 00:02:10.709 [268/743] Linking target lib/librte_hash.so.23.0 00:02:10.709 [269/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:10.709 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:10.968 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:10.968 [272/743] Linking static target lib/librte_distributor.a 00:02:11.227 [273/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:11.227 [274/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.227 [275/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.227 [276/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.227 [277/743] Linking target lib/librte_compressdev.so.23.0 00:02:11.227 [278/743] Linking target lib/librte_distributor.so.23.0 00:02:11.227 [279/743] Linking target lib/librte_ethdev.so.23.0 00:02:11.227 [280/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:11.227 [281/743] Generating lib/rte_eventdev_def with a custom command 00:02:11.485 [282/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:11.485 [283/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:11.485 [284/743] Linking target lib/librte_metrics.so.23.0 00:02:11.485 [285/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:11.485 [286/743] Linking target lib/librte_bitratestats.so.23.0 00:02:11.485 [287/743] Linking target lib/librte_bpf.so.23.0 00:02:11.743 [288/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:11.743 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:11.743 [290/743] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:11.743 [291/743] Generating lib/rte_gpudev_mingw with a custom command 00:02:12.000 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:12.000 [293/743] Linking static target lib/librte_efd.a 00:02:12.000 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:12.258 [295/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:12.258 [296/743] Linking static target lib/librte_cryptodev.a 00:02:12.258 [297/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.258 [298/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:12.258 [299/743] Linking target lib/librte_efd.so.23.0 00:02:12.258 [300/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:12.258 [301/743] Linking static target lib/librte_gpudev.a 00:02:12.516 [302/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:12.516 [303/743] Generating lib/rte_gro_def with a custom command 00:02:12.516 [304/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:12.516 [305/743] Generating lib/rte_gro_mingw with a custom command 00:02:12.774 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:12.774 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:12.774 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:13.032 [309/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:13.032 [310/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:13.032 [311/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.032 [312/743] Linking static target lib/librte_gro.a 00:02:13.032 [313/743] Linking target lib/librte_gpudev.so.23.0 00:02:13.289 [314/743] Generating lib/rte_gso_def with a custom command 00:02:13.289 [315/743] Generating lib/rte_gso_mingw with a custom command 00:02:13.289 [316/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:13.289 [317/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:13.289 [318/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.289 [319/743] Linking target lib/librte_gro.so.23.0 00:02:13.548 [320/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:13.548 [321/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:13.548 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:13.548 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:13.548 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:13.548 [325/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:13.806 [326/743] Linking static target lib/librte_eventdev.a 00:02:13.806 [327/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:13.806 [328/743] Linking static target lib/librte_gso.a 00:02:13.806 [329/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:13.806 [330/743] Linking static target lib/librte_jobstats.a 00:02:13.806 [331/743] Generating lib/rte_jobstats_def with a custom command 00:02:13.806 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:14.064 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:14.064 [334/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:14.064 [335/743] Linking target lib/librte_gso.so.23.0 00:02:14.064 [336/743] Generating lib/rte_latencystats_def with a custom command 00:02:14.064 [337/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:14.064 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:14.064 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:14.064 [340/743] Generating lib/rte_lpm_def with a custom command 00:02:14.321 [341/743] Generating lib/rte_lpm_mingw with a custom command 00:02:14.321 [342/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.322 [343/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:14.322 [344/743] Linking target lib/librte_jobstats.so.23.0 00:02:14.322 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:14.322 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:14.322 [347/743] Linking static target lib/librte_ip_frag.a 00:02:14.681 [348/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.681 [349/743] Linking target lib/librte_cryptodev.so.23.0 00:02:14.681 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:14.681 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.681 [352/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:14.681 [353/743] Linking target lib/librte_ip_frag.so.23.0 00:02:14.681 [354/743] Linking static target lib/librte_latencystats.a 00:02:14.953 [355/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:14.953 [356/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:14.953 [357/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:14.953 [358/743] Generating lib/rte_member_def with a custom command 00:02:14.953 [359/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:14.953 [360/743] Generating lib/rte_member_mingw with a custom command 00:02:14.953 [361/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.953 [362/743] Generating lib/rte_pcapng_def with a custom command 00:02:14.953 [363/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:14.953 [364/743] Linking target lib/librte_latencystats.so.23.0 00:02:14.953 [365/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:15.212 [366/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:15.212 [367/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:15.212 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:15.212 [369/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:15.471 [370/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:15.471 [371/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:15.471 [372/743] Linking static target lib/librte_lpm.a 00:02:15.471 [373/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:15.471 [374/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 
00:02:15.729 [375/743] Generating lib/rte_power_def with a custom command 00:02:15.729 [376/743] Generating lib/rte_power_mingw with a custom command 00:02:15.729 [377/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.729 [378/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:15.729 [379/743] Linking target lib/librte_eventdev.so.23.0 00:02:15.729 [380/743] Generating lib/rte_rawdev_def with a custom command 00:02:15.729 [381/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:15.729 [382/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.988 [383/743] Linking target lib/librte_lpm.so.23.0 00:02:15.988 [384/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:15.988 [385/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:15.988 [386/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:15.988 [387/743] Generating lib/rte_regexdev_def with a custom command 00:02:15.988 [388/743] Generating lib/rte_dmadev_def with a custom command 00:02:15.988 [389/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:15.988 [390/743] Linking static target lib/librte_pcapng.a 00:02:15.988 [391/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:15.988 [392/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:15.988 [393/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:15.988 [394/743] Generating lib/rte_rib_def with a custom command 00:02:15.988 [395/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:15.988 [396/743] Linking static target lib/librte_rawdev.a 00:02:15.988 [397/743] Generating lib/rte_rib_mingw with a custom command 00:02:15.988 [398/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:16.247 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:16.247 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:16.247 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.247 [402/743] Linking target lib/librte_pcapng.so.23.0 00:02:16.247 [403/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:16.247 [404/743] Linking static target lib/librte_dmadev.a 00:02:16.506 [405/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:16.506 [406/743] Linking static target lib/librte_power.a 00:02:16.506 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:16.506 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.506 [409/743] Linking target lib/librte_rawdev.so.23.0 00:02:16.506 [410/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:16.506 [411/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:16.506 [412/743] Linking static target lib/librte_regexdev.a 00:02:16.764 [413/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:16.764 [414/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:16.764 [415/743] Linking static target lib/librte_member.a 00:02:16.764 [416/743] Generating lib/rte_sched_def with a custom command 00:02:16.764 [417/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:16.764 
[418/743] Generating lib/rte_sched_mingw with a custom command 00:02:16.764 [419/743] Generating lib/rte_security_def with a custom command 00:02:16.764 [420/743] Generating lib/rte_security_mingw with a custom command 00:02:16.764 [421/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.764 [422/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:17.023 [423/743] Linking target lib/librte_dmadev.so.23.0 00:02:17.023 [424/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:17.023 [425/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:17.023 [426/743] Linking static target lib/librte_reorder.a 00:02:17.023 [427/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:17.023 [428/743] Generating lib/rte_stack_def with a custom command 00:02:17.023 [429/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:17.023 [430/743] Generating lib/rte_stack_mingw with a custom command 00:02:17.023 [431/743] Linking static target lib/librte_stack.a 00:02:17.023 [432/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.023 [433/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:17.023 [434/743] Linking target lib/librte_member.so.23.0 00:02:17.281 [435/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.281 [436/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:17.281 [437/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.281 [438/743] Linking target lib/librte_reorder.so.23.0 00:02:17.281 [439/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:17.281 [440/743] Linking static target lib/librte_rib.a 00:02:17.281 [441/743] Linking target lib/librte_stack.so.23.0 00:02:17.281 [442/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.281 [443/743] Linking target lib/librte_power.so.23.0 00:02:17.281 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.540 [445/743] Linking target lib/librte_regexdev.so.23.0 00:02:17.540 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:17.540 [447/743] Linking static target lib/librte_security.a 00:02:17.540 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.798 [449/743] Linking target lib/librte_rib.so.23.0 00:02:17.798 [450/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:17.798 [451/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:17.798 [452/743] Generating lib/rte_vhost_def with a custom command 00:02:17.798 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:02:17.798 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:18.056 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.056 [456/743] Linking target lib/librte_security.so.23.0 00:02:18.056 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:18.056 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:18.056 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:18.315 [460/743] Linking static target lib/librte_sched.a 
00:02:18.573 [461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.573 [462/743] Linking target lib/librte_sched.so.23.0 00:02:18.573 [463/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:18.831 [464/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:18.831 [465/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:18.831 [466/743] Generating lib/rte_ipsec_def with a custom command 00:02:18.831 [467/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:18.831 [468/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:18.831 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:18.831 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:19.089 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:19.346 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:19.346 [473/743] Generating lib/rte_fib_def with a custom command 00:02:19.346 [474/743] Generating lib/rte_fib_mingw with a custom command 00:02:19.346 [475/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:19.346 [476/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:19.346 [477/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:19.346 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:19.603 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:19.603 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:19.861 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:19.861 [482/743] Linking static target lib/librte_ipsec.a 00:02:20.119 [483/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:20.119 [484/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.377 [485/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:20.377 [486/743] Linking static target lib/librte_fib.a 00:02:20.377 [487/743] Linking target lib/librte_ipsec.so.23.0 00:02:20.377 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:20.377 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:20.377 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:20.377 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:20.635 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.635 [493/743] Linking target lib/librte_fib.so.23.0 00:02:20.892 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:21.456 [495/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:21.456 [496/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:21.456 [497/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:21.456 [498/743] Generating lib/rte_port_def with a custom command 00:02:21.456 [499/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:21.456 [500/743] Generating lib/rte_port_mingw with a custom command 00:02:21.456 [501/743] Generating lib/rte_pdump_mingw with a custom command 00:02:21.456 [502/743] Generating lib/rte_pdump_def with a custom command 00:02:21.456 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:21.713 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:21.713 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:21.713 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:21.713 [507/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:21.713 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:21.969 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:21.969 [510/743] Linking static target lib/librte_port.a 00:02:22.226 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:22.226 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:22.483 [513/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:22.483 [514/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.483 [515/743] Linking target lib/librte_port.so.23.0 00:02:22.483 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:22.483 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:22.741 [518/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:22.741 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:22.741 [520/743] Linking static target lib/librte_pdump.a 00:02:22.999 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.999 [522/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:22.999 [523/743] Linking target lib/librte_pdump.so.23.0 00:02:22.999 [524/743] Generating lib/rte_table_def with a custom command 00:02:22.999 [525/743] Generating lib/rte_table_mingw with a custom command 00:02:23.259 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:23.259 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:23.259 [528/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:23.517 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:23.518 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:23.777 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:23.777 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:23.777 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:23.777 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:23.777 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:23.777 [536/743] Linking static target lib/librte_table.a 00:02:24.036 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:24.295 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:24.555 [539/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:24.555 [540/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.555 [541/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:24.555 [542/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:24.555 [543/743] Linking target lib/librte_table.so.23.0 00:02:24.555 [544/743] Generating lib/rte_graph_def with a custom command 00:02:24.555 [545/743] Generating lib/rte_graph_mingw with a custom 
command 00:02:24.555 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:24.812 [547/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:24.812 [548/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:25.069 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:25.069 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:25.069 [551/743] Linking static target lib/librte_graph.a 00:02:25.325 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:25.583 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:25.583 [554/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:25.583 [555/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:25.841 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:25.841 [557/743] Generating lib/rte_node_def with a custom command 00:02:25.841 [558/743] Generating lib/rte_node_mingw with a custom command 00:02:25.841 [559/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:26.099 [560/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.099 [561/743] Linking target lib/librte_graph.so.23.0 00:02:26.099 [562/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:26.099 [563/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:26.099 [564/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:26.099 [565/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:26.358 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:26.358 [567/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:26.358 [568/743] Generating drivers/rte_bus_pci_def with a custom command 00:02:26.358 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:26.358 [570/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:26.358 [571/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:26.358 [572/743] Generating drivers/rte_bus_vdev_def with a custom command 00:02:26.358 [573/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:26.358 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:02:26.358 [575/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:26.358 [576/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:26.617 [577/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:26.617 [578/743] Linking static target lib/librte_node.a 00:02:26.617 [579/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:26.617 [580/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:26.617 [581/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:26.876 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.876 [583/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:26.876 [584/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:26.876 [585/743] Linking target lib/librte_node.so.23.0 00:02:26.876 [586/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:26.876 [587/743] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:26.876 [588/743] Linking static target drivers/librte_bus_vdev.a 00:02:27.134 [589/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:27.134 [590/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:27.134 [591/743] Linking static target drivers/librte_bus_pci.a 00:02:27.134 [592/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.134 [593/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:27.134 [594/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:27.134 [595/743] Linking target drivers/librte_bus_vdev.so.23.0 00:02:27.392 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:27.392 [597/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.392 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:27.392 [599/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:27.392 [600/743] Linking target drivers/librte_bus_pci.so.23.0 00:02:27.392 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:27.650 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:27.650 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:27.650 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:27.908 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:27.908 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:27.908 [607/743] Linking static target drivers/librte_mempool_ring.a 00:02:27.909 [608/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:27.909 [609/743] Linking target drivers/librte_mempool_ring.so.23.0 00:02:27.909 [610/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:28.475 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:28.736 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:28.736 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:28.736 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:29.382 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:29.382 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:29.382 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:29.950 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:29.950 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:29.950 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:30.209 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:30.209 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:02:30.209 [623/743] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:30.209 [624/743] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:30.209 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:31.145 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:31.403 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:31.403 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:31.662 [629/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:31.662 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:31.662 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:31.662 [632/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:31.662 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:31.662 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:31.921 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:32.179 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:32.439 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:32.439 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:32.439 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:32.699 [640/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:32.957 [641/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:32.957 [642/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:32.957 [643/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:32.957 [644/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:32.957 [645/743] Linking static target drivers/librte_net_i40e.a 00:02:32.957 [646/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:32.957 [647/743] Linking static target lib/librte_vhost.a 00:02:32.957 [648/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:33.216 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:33.216 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:33.474 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:33.474 [652/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.733 [653/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:33.733 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:02:33.733 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:33.992 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:33.992 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:34.250 [658/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.250 [659/743] Linking target lib/librte_vhost.so.23.0 00:02:34.508 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:34.509 [661/743] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:34.509 [662/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:34.509 [663/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:34.509 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:34.509 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:34.767 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:34.767 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:35.025 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:35.025 [669/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:35.284 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:35.543 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:35.543 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:35.543 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:36.110 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:36.110 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:36.368 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:36.368 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:36.368 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:36.628 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:36.628 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:36.628 [681/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:36.886 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:37.145 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:37.145 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:37.145 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:37.145 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:37.404 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:37.404 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:37.662 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:37.662 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:37.662 [691/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:37.662 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:37.921 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:37.921 [694/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:38.180 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:38.180 [696/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:38.438 [697/743] 
Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:38.697 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:38.697 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:38.955 [700/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:38.955 [701/743] Linking static target lib/librte_pipeline.a 00:02:39.213 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:39.213 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:39.213 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:39.472 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:39.730 [706/743] Linking target app/dpdk-dumpcap 00:02:39.730 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:39.730 [708/743] Linking target app/dpdk-pdump 00:02:39.730 [709/743] Linking target app/dpdk-proc-info 00:02:39.730 [710/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:39.990 [711/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:39.990 [712/743] Linking target app/dpdk-test-acl 00:02:39.990 [713/743] Linking target app/dpdk-test-bbdev 00:02:39.990 [714/743] Linking target app/dpdk-test-cmdline 00:02:40.248 [715/743] Linking target app/dpdk-test-compress-perf 00:02:40.248 [716/743] Linking target app/dpdk-test-crypto-perf 00:02:40.248 [717/743] Linking target app/dpdk-test-eventdev 00:02:40.248 [718/743] Linking target app/dpdk-test-fib 00:02:40.507 [719/743] Linking target app/dpdk-test-flow-perf 00:02:40.507 [720/743] Linking target app/dpdk-test-gpudev 00:02:40.507 [721/743] Linking target app/dpdk-test-pipeline 00:02:41.074 [722/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:41.074 [723/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:41.074 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:41.331 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:41.331 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:41.331 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:41.589 [728/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.589 [729/743] Linking target lib/librte_pipeline.so.23.0 00:02:41.847 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:42.104 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:42.104 [732/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:42.104 [733/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:42.104 [734/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:42.104 [735/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:42.362 [736/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:42.620 [737/743] Linking target app/dpdk-test-sad 00:02:42.620 [738/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:42.620 [739/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:42.878 [740/743] Linking target app/dpdk-test-regex 00:02:42.878 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:43.136 [742/743] Linking target app/dpdk-testpmd 00:02:43.393 [743/743] Linking target 
app/dpdk-test-security-perf 00:02:43.393 05:49:34 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:02:43.393 05:49:35 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:43.393 05:49:35 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:43.393 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:43.393 [0/1] Installing files. 00:02:43.652 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:43.652 Installing 
/home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.652 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 
00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:43.653 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.653 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 
Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:43.654 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 
00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:43.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 
00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:43.914 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:43.915 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:43.915 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:43.915 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.916 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.916 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:43.916 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:43.916 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_hash.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.916 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing 
lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:43.917 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 
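Note the pattern in the library entries above: each component is installed twice, once as a static archive (librte_*.a) and once as an ABI-versioned shared object (librte_*.so.23.0), both under build/lib. The driver PMDs additionally get a copy under lib/dpdk/pmds-23.0; as I understand the meson defaults, that per-ABI subdirectory is the path a shared-library EAL scans for loadable PMDs at startup. A quick way to inspect what an installed PMD registers is the pmdinfo helper that this stage drops into build/bin further down; a minimal sketch, assuming pyelftools is available on the host:

  $ python3 /home/vagrant/spdk_repo/dpdk/build/bin/dpdk-pmdinfo.py \
      /home/vagrant/spdk_repo/dpdk/build/lib/librte_net_i40e.so.23.0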
00:02:43.917 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:43.917 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:43.917 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.917 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:43.917 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.917 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.917 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.917 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.917 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.917 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.917 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.917 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.917 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:44.178 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:44.178 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:44.178 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:44.178 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:44.178 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:44.178 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:44.178 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:44.178 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.178 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.179 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing 
/home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.180 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 
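Toward the end of this stage (below) come the usertools scripts, the pkg-config metadata (libdpdk.pc and libdpdk-libs.pc under build/lib/pkgconfig), and the usual soname symlink chains, librte_X.so -> librte_X.so.23 -> librte_X.so.23.0, so that unversioned link-time names resolve to the ABI-versioned objects. A minimal consumer sketch against this (non-system) prefix; app.c here is a placeholder for any DPDK program, not a file from this build:

  $ export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  $ pkg-config --modversion libdpdk   # should report 22.11.4 for this checkout
  $ cc app.c $(pkg-config --cflags --libs libdpdk) -o app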
00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:44.181 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:44.182 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:44.182 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:44.182 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:44.182 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:44.182 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:44.182 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:44.182 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:44.182 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:44.182 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:44.182 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:44.182 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:44.182 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:44.182 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:44.182 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:44.182 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:44.182 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:44.182 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:44.182 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:44.182 Installing symlink 
pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:44.182 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:44.182 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:44.182 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:44.182 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:44.182 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:44.182 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:44.182 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:44.182 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:44.182 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:44.182 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:44.182 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:44.182 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:44.182 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:44.182 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:44.182 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:44.182 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:44.182 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:44.182 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:44.182 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:44.182 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:44.182 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:44.182 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:44.182 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:44.182 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:44.182 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:44.182 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:44.182 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:44.182 Installing symlink pointing to 
librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:44.182 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:44.182 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:44.182 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:44.182 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:44.182 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:44.182 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:44.182 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:44.182 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:44.182 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:44.182 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:44.182 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:44.182 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:44.182 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:44.182 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:44.182 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:44.182 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:44.182 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:44.182 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:44.182 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:44.182 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:44.182 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:44.182 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:44.182 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:44.182 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:44.182 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:44.182 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:44.182 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:44.182 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:44.182 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:44.182 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:44.182 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:44.182 Installing symlink pointing to librte_member.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:44.182 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:44.182 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:02:44.182 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:44.182 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:44.183 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:44.183 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:44.183 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:44.183 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:44.183 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:44.183 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:44.183 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:44.183 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:44.183 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:44.183 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:44.183 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:44.183 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:44.183 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:44.183 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:44.183 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:44.183 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:44.183 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:44.183 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:44.183 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:44.183 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:44.183 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:44.183 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:44.183 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:44.183 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 
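Each library in this block gets a two-step symlink chain: the versioned object librte_X.so.23.0 carries the code, librte_X.so.23 is the soname the dynamic linker resolves at run time, and the bare librte_X.so is the unversioned name the compile-time linker uses for -lrte_X. A hand-rolled equivalent for one library, purely as illustration of what each "Installing symlink pointing to ..." entry amounts to:

    cd /home/vagrant/spdk_repo/dpdk/build/lib
    ln -sf librte_port.so.23.0 librte_port.so.23   # runtime soname link
    ln -sf librte_port.so.23   librte_port.so      # dev-time link name used by -lrte_port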
00:02:44.183 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:44.183 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:44.183 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:44.183 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:44.183 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:44.183 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:44.183 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:44.183 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:44.183 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:44.183 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:44.183 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:44.183 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:44.183 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:44.183 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:44.183 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:44.183 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:44.183 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:44.183 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:44.183 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:44.183 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:44.183 05:49:35 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:02:44.183 05:49:35 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:44.183 00:02:44.183 real 0m50.603s 00:02:44.183 user 6m4.500s 00:02:44.183 sys 0m53.973s 00:02:44.183 05:49:35 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:44.183 ************************************ 00:02:44.183 END TEST build_native_dpdk 00:02:44.183 ************************************ 00:02:44.183 05:49:35 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:44.183 05:49:35 -- common/autotest_common.sh@1142 -- $ return 0 00:02:44.183 05:49:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:44.183 05:49:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:44.183 05:49:35 -- spdk/autobuild.sh@51 -- $ 
[[ 0 -eq 1 ]] 00:02:44.183 05:49:35 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:44.183 05:49:35 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:44.183 05:49:35 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:44.183 05:49:35 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:44.183 05:49:35 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:02:44.442 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:44.442 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:44.442 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:44.442 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:45.007 Using 'verbs' RDMA provider 00:02:58.171 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:13.041 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:13.041 Creating mk/config.mk...done. 00:03:13.041 Creating mk/cc.flags.mk...done. 00:03:13.041 Type 'make' to build. 00:03:13.041 05:50:02 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:13.041 05:50:02 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:13.041 05:50:02 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:13.041 05:50:02 -- common/autotest_common.sh@10 -- $ set +x 00:03:13.041 ************************************ 00:03:13.041 START TEST make 00:03:13.041 ************************************ 00:03:13.041 05:50:02 make -- common/autotest_common.sh@1123 -- $ make -j10 00:03:13.041 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:13.041 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:13.041 meson setup builddir \ 00:03:13.041 -Dwith-libaio=enabled \ 00:03:13.041 -Dwith-liburing=enabled \ 00:03:13.041 -Dwith-libvfn=disabled \ 00:03:13.041 -Dwith-spdk=false && \ 00:03:13.041 meson compile -C builddir && \ 00:03:13.041 cd -) 00:03:13.041 make[1]: Nothing to be done for 'all'. 
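The configure step above wires SPDK to the just-staged DPDK via --with-dpdk, which is why the "Using .../build/lib/pkgconfig for additional libs" lines follow, and the xnvme subproject then runs its own meson setup/compile before the top-level make. The same sequence can be reproduced by hand with a subset of the flags copied from the log (the parallelism value is arbitrary):

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror \
        --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
        --with-xnvme --with-shared
    make -j"$(nproc)"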
00:03:13.975 The Meson build system 00:03:13.975 Version: 1.3.1 00:03:13.975 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:13.975 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:13.975 Build type: native build 00:03:13.975 Project name: xnvme 00:03:13.975 Project version: 0.7.3 00:03:13.975 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:13.975 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:13.975 Host machine cpu family: x86_64 00:03:13.975 Host machine cpu: x86_64 00:03:13.975 Message: host_machine.system: linux 00:03:13.975 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:13.975 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:13.975 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:13.975 Run-time dependency threads found: YES 00:03:13.975 Has header "setupapi.h" : NO 00:03:13.975 Has header "linux/blkzoned.h" : YES 00:03:13.975 Has header "linux/blkzoned.h" : YES (cached) 00:03:13.975 Has header "libaio.h" : YES 00:03:13.975 Library aio found: YES 00:03:13.975 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:13.975 Run-time dependency liburing found: YES 2.2 00:03:13.975 Dependency libvfn skipped: feature with-libvfn disabled 00:03:13.975 Run-time dependency appleframeworks found: NO (tried framework) 00:03:13.975 Run-time dependency appleframeworks found: NO (tried framework) 00:03:13.975 Configuring xnvme_config.h using configuration 00:03:13.975 Configuring xnvme.spec using configuration 00:03:13.975 Run-time dependency bash-completion found: YES 2.11 00:03:13.975 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:13.975 Program cp found: YES (/usr/bin/cp) 00:03:13.975 Has header "winsock2.h" : NO 00:03:13.975 Has header "dbghelp.h" : NO 00:03:13.975 Library rpcrt4 found: NO 00:03:13.975 Library rt found: YES 00:03:13.975 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:13.975 Found CMake: /usr/bin/cmake (3.27.7) 00:03:13.975 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:13.975 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:13.975 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:13.975 Build targets in project: 32 00:03:13.975 00:03:13.975 xnvme 0.7.3 00:03:13.975 00:03:13.975 User defined options 00:03:13.975 with-libaio : enabled 00:03:13.975 with-liburing: enabled 00:03:13.975 with-libvfn : disabled 00:03:13.975 with-spdk : false 00:03:13.975 00:03:13.975 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:14.542 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:14.542 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:14.542 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:14.542 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:14.542 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:14.542 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:14.542 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:14.542 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:14.542 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:14.542 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:14.542 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:14.542 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:14.542 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:14.801 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:14.801 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:14.801 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:14.801 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:14.801 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:14.801 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:14.801 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:14.801 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:14.801 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:14.801 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:14.801 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:14.801 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:14.801 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:14.801 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:14.801 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:14.801 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:14.801 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:14.801 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:14.801 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:14.801 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:15.059 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:15.059 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:15.059 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:15.059 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:15.059 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:15.059 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:15.059 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:15.059 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:15.059 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:15.059 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:15.059 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:15.059 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:15.059 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:15.060 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:15.060 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:15.060 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:15.060 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:15.060 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:15.060 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:15.060 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:15.060 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:15.060 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:15.060 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:15.060 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:15.060 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:15.060 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:15.060 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:15.318 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:15.318 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:15.318 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:15.318 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:15.318 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:15.318 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:15.318 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:15.318 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:15.318 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:15.318 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:15.318 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:15.318 [71/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:15.318 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:15.318 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:15.577 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:15.577 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:15.577 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:15.577 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:15.577 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:15.577 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:15.577 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:15.577 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:15.577 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:15.577 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:15.577 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:15.577 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:15.577 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:15.577 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:15.836 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:15.836 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:15.836 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:15.836 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:15.836 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:15.836 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:15.836 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:15.836 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:15.836 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:15.836 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:15.836 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:15.836 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:15.836 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:15.836 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:15.836 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:15.836 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:15.836 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:15.836 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:15.836 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:15.836 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:15.836 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:15.836 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:15.836 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:15.836 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:15.836 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:15.836 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:15.836 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:15.836 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:15.836 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:16.095 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:16.095 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:16.095 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:16.095 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:16.095 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:16.095 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:16.095 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:16.095 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:16.095 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:16.095 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:16.095 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:16.095 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:16.095 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:16.095 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:16.095 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:16.095 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:16.095 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:16.095 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:16.353 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:16.353 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:16.353 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:16.353 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:16.353 [139/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:16.353 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:16.353 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:16.353 [142/203] Compiling C object 
tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:16.353 [143/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:16.353 [144/203] Linking target lib/libxnvme.so 00:03:16.353 [145/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:16.353 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:16.353 [147/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:16.612 [148/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:16.612 [149/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:16.612 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:16.612 [151/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:16.612 [152/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:16.612 [153/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:16.612 [154/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:16.612 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:16.612 [156/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:16.612 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:16.612 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:16.612 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:16.612 [160/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:16.612 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:16.870 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:16.870 [163/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:16.870 [164/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:16.870 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:16.870 [166/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:16.870 [167/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:16.870 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:16.870 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:16.870 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:17.128 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:17.128 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:17.128 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:17.128 [174/203] Linking static target lib/libxnvme.a 00:03:17.128 [175/203] Linking target tests/xnvme_tests_buf 00:03:17.128 [176/203] Linking target tests/xnvme_tests_cli 00:03:17.387 [177/203] Linking target tests/xnvme_tests_async_intf 00:03:17.387 [178/203] Linking target tests/xnvme_tests_znd_append 00:03:17.387 [179/203] Linking target tests/xnvme_tests_enum 00:03:17.387 [180/203] Linking target tests/xnvme_tests_scc 00:03:17.387 [181/203] Linking target tests/xnvme_tests_ioworker 00:03:17.387 [182/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:17.387 [183/203] Linking target tests/xnvme_tests_lblk 00:03:17.387 [184/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:17.387 [185/203] Linking target tests/xnvme_tests_kvs 00:03:17.387 [186/203] Linking target tests/xnvme_tests_xnvme_file 00:03:17.387 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:17.387 [188/203] Linking target tests/xnvme_tests_map 00:03:17.387 [189/203] Linking target tests/xnvme_tests_znd_state 00:03:17.387 [190/203] Linking target tools/lblk 00:03:17.387 
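The ninja run above has linked both libxnvme variants, the test binaries, and the CLI tools (lblk, zoned, kvs, xdd, xnvme, xnvme_file) inside builddir. A plausible smoke test once the link targets finish — `xnvme enum` is an xNVMe CLI subcommand for device enumeration, but whether it lists anything depends on NVMe devices being visible in this VM, so treat this as a sketch:

    cd /home/vagrant/spdk_repo/spdk/xnvme/builddir
    ./tools/xnvme enum    # enumerate NVMe devices/URIs visible to xNVMe, if any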
[191/203] Linking target tools/zoned 00:03:17.387 [192/203] Linking target examples/xnvme_enum 00:03:17.387 [193/203] Linking target tools/kvs 00:03:17.387 [194/203] Linking target tools/xdd 00:03:17.387 [195/203] Linking target examples/xnvme_dev 00:03:17.387 [196/203] Linking target tools/xnvme_file 00:03:17.387 [197/203] Linking target tools/xnvme 00:03:17.387 [198/203] Linking target examples/xnvme_single_sync 00:03:17.387 [199/203] Linking target examples/xnvme_io_async 00:03:17.387 [200/203] Linking target examples/xnvme_hello 00:03:17.387 [201/203] Linking target examples/xnvme_single_async 00:03:17.387 [202/203] Linking target examples/zoned_io_async 00:03:17.387 [203/203] Linking target examples/zoned_io_sync 00:03:17.387 INFO: autodetecting backend as ninja 00:03:17.387 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:17.387 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:39.307 CC lib/ut_mock/mock.o 00:03:39.307 CC lib/ut/ut.o 00:03:39.307 CC lib/log/log.o 00:03:39.307 CC lib/log/log_flags.o 00:03:39.307 CC lib/log/log_deprecated.o 00:03:39.307 LIB libspdk_log.a 00:03:39.307 LIB libspdk_ut.a 00:03:39.307 LIB libspdk_ut_mock.a 00:03:39.307 SO libspdk_ut.so.2.0 00:03:39.307 SO libspdk_ut_mock.so.6.0 00:03:39.307 SO libspdk_log.so.7.0 00:03:39.307 SYMLINK libspdk_ut.so 00:03:39.307 SYMLINK libspdk_ut_mock.so 00:03:39.307 SYMLINK libspdk_log.so 00:03:39.307 CC lib/dma/dma.o 00:03:39.307 CC lib/util/base64.o 00:03:39.307 CC lib/util/bit_array.o 00:03:39.307 CC lib/util/cpuset.o 00:03:39.307 CC lib/util/crc16.o 00:03:39.307 CC lib/util/crc32.o 00:03:39.307 CC lib/util/crc32c.o 00:03:39.307 CXX lib/trace_parser/trace.o 00:03:39.307 CC lib/ioat/ioat.o 00:03:39.307 CC lib/vfio_user/host/vfio_user_pci.o 00:03:39.307 CC lib/util/crc32_ieee.o 00:03:39.307 CC lib/vfio_user/host/vfio_user.o 00:03:39.307 CC lib/util/crc64.o 00:03:39.307 CC lib/util/dif.o 00:03:39.307 LIB libspdk_dma.a 00:03:39.307 CC lib/util/fd.o 00:03:39.307 SO libspdk_dma.so.4.0 00:03:39.307 CC lib/util/file.o 00:03:39.307 CC lib/util/hexlify.o 00:03:39.307 CC lib/util/iov.o 00:03:39.307 SYMLINK libspdk_dma.so 00:03:39.307 CC lib/util/math.o 00:03:39.307 LIB libspdk_ioat.a 00:03:39.307 SO libspdk_ioat.so.7.0 00:03:39.307 CC lib/util/pipe.o 00:03:39.307 CC lib/util/strerror_tls.o 00:03:39.307 LIB libspdk_vfio_user.a 00:03:39.307 CC lib/util/string.o 00:03:39.307 SYMLINK libspdk_ioat.so 00:03:39.307 CC lib/util/uuid.o 00:03:39.307 SO libspdk_vfio_user.so.5.0 00:03:39.307 CC lib/util/fd_group.o 00:03:39.307 CC lib/util/xor.o 00:03:39.307 CC lib/util/zipf.o 00:03:39.307 SYMLINK libspdk_vfio_user.so 00:03:39.307 LIB libspdk_util.a 00:03:39.307 SO libspdk_util.so.9.1 00:03:39.307 LIB libspdk_trace_parser.a 00:03:39.307 SO libspdk_trace_parser.so.5.0 00:03:39.307 SYMLINK libspdk_util.so 00:03:39.307 SYMLINK libspdk_trace_parser.so 00:03:39.307 CC lib/vmd/vmd.o 00:03:39.307 CC lib/vmd/led.o 00:03:39.307 CC lib/idxd/idxd.o 00:03:39.307 CC lib/json/json_parse.o 00:03:39.307 CC lib/idxd/idxd_user.o 00:03:39.307 CC lib/idxd/idxd_kernel.o 00:03:39.307 CC lib/conf/conf.o 00:03:39.307 CC lib/rdma_provider/common.o 00:03:39.307 CC lib/rdma_utils/rdma_utils.o 00:03:39.307 CC lib/env_dpdk/env.o 00:03:39.307 CC lib/env_dpdk/memory.o 00:03:39.307 CC lib/json/json_util.o 00:03:39.307 LIB libspdk_conf.a 00:03:39.307 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:39.307 CC lib/json/json_write.o 00:03:39.307 SO libspdk_conf.so.6.0 00:03:39.307 CC lib/env_dpdk/pci.o 00:03:39.307 LIB 
libspdk_rdma_utils.a 00:03:39.307 SYMLINK libspdk_conf.so 00:03:39.307 SO libspdk_rdma_utils.so.1.0 00:03:39.307 CC lib/env_dpdk/init.o 00:03:39.307 SYMLINK libspdk_rdma_utils.so 00:03:39.307 CC lib/env_dpdk/threads.o 00:03:39.307 CC lib/env_dpdk/pci_ioat.o 00:03:39.307 LIB libspdk_rdma_provider.a 00:03:39.307 SO libspdk_rdma_provider.so.6.0 00:03:39.307 CC lib/env_dpdk/pci_virtio.o 00:03:39.307 CC lib/env_dpdk/pci_vmd.o 00:03:39.307 SYMLINK libspdk_rdma_provider.so 00:03:39.307 CC lib/env_dpdk/pci_idxd.o 00:03:39.307 LIB libspdk_json.a 00:03:39.307 CC lib/env_dpdk/pci_event.o 00:03:39.307 SO libspdk_json.so.6.0 00:03:39.307 CC lib/env_dpdk/sigbus_handler.o 00:03:39.307 CC lib/env_dpdk/pci_dpdk.o 00:03:39.307 LIB libspdk_idxd.a 00:03:39.307 SYMLINK libspdk_json.so 00:03:39.307 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:39.307 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:39.307 SO libspdk_idxd.so.12.0 00:03:39.307 LIB libspdk_vmd.a 00:03:39.307 SYMLINK libspdk_idxd.so 00:03:39.307 SO libspdk_vmd.so.6.0 00:03:39.307 SYMLINK libspdk_vmd.so 00:03:39.307 CC lib/jsonrpc/jsonrpc_server.o 00:03:39.307 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:39.307 CC lib/jsonrpc/jsonrpc_client.o 00:03:39.307 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:39.307 LIB libspdk_jsonrpc.a 00:03:39.307 SO libspdk_jsonrpc.so.6.0 00:03:39.565 SYMLINK libspdk_jsonrpc.so 00:03:39.823 CC lib/rpc/rpc.o 00:03:39.823 LIB libspdk_env_dpdk.a 00:03:40.081 LIB libspdk_rpc.a 00:03:40.081 SO libspdk_rpc.so.6.0 00:03:40.081 SO libspdk_env_dpdk.so.14.1 00:03:40.081 SYMLINK libspdk_rpc.so 00:03:40.338 SYMLINK libspdk_env_dpdk.so 00:03:40.338 CC lib/notify/notify.o 00:03:40.338 CC lib/notify/notify_rpc.o 00:03:40.338 CC lib/keyring/keyring.o 00:03:40.338 CC lib/keyring/keyring_rpc.o 00:03:40.338 CC lib/trace/trace.o 00:03:40.338 CC lib/trace/trace_flags.o 00:03:40.338 CC lib/trace/trace_rpc.o 00:03:40.596 LIB libspdk_notify.a 00:03:40.596 SO libspdk_notify.so.6.0 00:03:40.596 SYMLINK libspdk_notify.so 00:03:40.596 LIB libspdk_keyring.a 00:03:40.596 LIB libspdk_trace.a 00:03:40.854 SO libspdk_keyring.so.1.0 00:03:40.854 SO libspdk_trace.so.10.0 00:03:40.854 SYMLINK libspdk_keyring.so 00:03:40.854 SYMLINK libspdk_trace.so 00:03:41.112 CC lib/sock/sock.o 00:03:41.112 CC lib/sock/sock_rpc.o 00:03:41.112 CC lib/thread/iobuf.o 00:03:41.112 CC lib/thread/thread.o 00:03:41.678 LIB libspdk_sock.a 00:03:41.678 SO libspdk_sock.so.10.0 00:03:41.678 SYMLINK libspdk_sock.so 00:03:41.934 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:41.934 CC lib/nvme/nvme_ctrlr.o 00:03:41.934 CC lib/nvme/nvme_fabric.o 00:03:41.934 CC lib/nvme/nvme_ns.o 00:03:41.934 CC lib/nvme/nvme_ns_cmd.o 00:03:41.934 CC lib/nvme/nvme_pcie_common.o 00:03:41.934 CC lib/nvme/nvme_pcie.o 00:03:41.934 CC lib/nvme/nvme.o 00:03:41.934 CC lib/nvme/nvme_qpair.o 00:03:42.868 CC lib/nvme/nvme_quirks.o 00:03:42.868 CC lib/nvme/nvme_transport.o 00:03:42.868 CC lib/nvme/nvme_discovery.o 00:03:43.126 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:43.126 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:43.126 LIB libspdk_thread.a 00:03:43.126 CC lib/nvme/nvme_tcp.o 00:03:43.126 SO libspdk_thread.so.10.1 00:03:43.126 CC lib/nvme/nvme_opal.o 00:03:43.126 SYMLINK libspdk_thread.so 00:03:43.126 CC lib/nvme/nvme_io_msg.o 00:03:43.384 CC lib/nvme/nvme_poll_group.o 00:03:43.384 CC lib/nvme/nvme_zns.o 00:03:43.649 CC lib/nvme/nvme_stubs.o 00:03:43.649 CC lib/nvme/nvme_auth.o 00:03:43.649 CC lib/nvme/nvme_cuse.o 00:03:43.908 CC lib/accel/accel.o 00:03:43.908 CC lib/accel/accel_rpc.o 00:03:43.908 CC lib/accel/accel_sw.o 00:03:43.908 CC 
lib/nvme/nvme_rdma.o 00:03:44.166 CC lib/blob/blobstore.o 00:03:44.166 CC lib/init/json_config.o 00:03:44.166 CC lib/blob/request.o 00:03:44.166 CC lib/virtio/virtio.o 00:03:44.424 CC lib/init/subsystem.o 00:03:44.682 CC lib/blob/zeroes.o 00:03:44.682 CC lib/virtio/virtio_vhost_user.o 00:03:44.682 CC lib/init/subsystem_rpc.o 00:03:44.682 CC lib/virtio/virtio_vfio_user.o 00:03:44.682 CC lib/virtio/virtio_pci.o 00:03:44.682 CC lib/init/rpc.o 00:03:44.940 CC lib/blob/blob_bs_dev.o 00:03:44.940 LIB libspdk_init.a 00:03:44.940 SO libspdk_init.so.5.0 00:03:45.199 LIB libspdk_accel.a 00:03:45.199 LIB libspdk_virtio.a 00:03:45.199 SYMLINK libspdk_init.so 00:03:45.199 SO libspdk_accel.so.15.1 00:03:45.199 SO libspdk_virtio.so.7.0 00:03:45.199 SYMLINK libspdk_accel.so 00:03:45.199 SYMLINK libspdk_virtio.so 00:03:45.458 CC lib/event/app.o 00:03:45.458 CC lib/event/reactor.o 00:03:45.458 CC lib/event/log_rpc.o 00:03:45.458 CC lib/event/app_rpc.o 00:03:45.458 CC lib/event/scheduler_static.o 00:03:45.458 CC lib/bdev/bdev.o 00:03:45.458 CC lib/bdev/bdev_rpc.o 00:03:45.458 CC lib/bdev/bdev_zone.o 00:03:45.458 CC lib/bdev/part.o 00:03:45.458 CC lib/bdev/scsi_nvme.o 00:03:45.715 LIB libspdk_nvme.a 00:03:45.974 SO libspdk_nvme.so.13.1 00:03:45.974 LIB libspdk_event.a 00:03:45.974 SO libspdk_event.so.14.0 00:03:46.232 SYMLINK libspdk_event.so 00:03:46.232 SYMLINK libspdk_nvme.so 00:03:48.135 LIB libspdk_blob.a 00:03:48.394 SO libspdk_blob.so.11.0 00:03:48.394 SYMLINK libspdk_blob.so 00:03:48.652 LIB libspdk_bdev.a 00:03:48.652 SO libspdk_bdev.so.15.1 00:03:48.652 CC lib/lvol/lvol.o 00:03:48.652 CC lib/blobfs/blobfs.o 00:03:48.652 CC lib/blobfs/tree.o 00:03:48.910 SYMLINK libspdk_bdev.so 00:03:48.910 CC lib/scsi/dev.o 00:03:48.910 CC lib/scsi/lun.o 00:03:48.910 CC lib/scsi/port.o 00:03:48.910 CC lib/scsi/scsi.o 00:03:48.910 CC lib/nvmf/ctrlr.o 00:03:48.910 CC lib/ftl/ftl_core.o 00:03:48.910 CC lib/nbd/nbd.o 00:03:48.910 CC lib/ublk/ublk.o 00:03:49.167 CC lib/nbd/nbd_rpc.o 00:03:49.167 CC lib/scsi/scsi_bdev.o 00:03:49.167 CC lib/nvmf/ctrlr_discovery.o 00:03:49.425 CC lib/ftl/ftl_init.o 00:03:49.425 CC lib/ftl/ftl_layout.o 00:03:49.425 CC lib/ftl/ftl_debug.o 00:03:49.684 LIB libspdk_nbd.a 00:03:49.684 SO libspdk_nbd.so.7.0 00:03:49.684 CC lib/ftl/ftl_io.o 00:03:49.684 SYMLINK libspdk_nbd.so 00:03:49.684 CC lib/ublk/ublk_rpc.o 00:03:49.942 CC lib/ftl/ftl_sb.o 00:03:49.942 CC lib/ftl/ftl_l2p.o 00:03:49.942 CC lib/scsi/scsi_pr.o 00:03:49.942 LIB libspdk_blobfs.a 00:03:49.942 CC lib/nvmf/ctrlr_bdev.o 00:03:49.942 CC lib/ftl/ftl_l2p_flat.o 00:03:49.942 SO libspdk_blobfs.so.10.0 00:03:49.942 LIB libspdk_ublk.a 00:03:49.942 SO libspdk_ublk.so.3.0 00:03:49.942 LIB libspdk_lvol.a 00:03:49.942 SYMLINK libspdk_blobfs.so 00:03:49.942 CC lib/ftl/ftl_nv_cache.o 00:03:49.942 CC lib/nvmf/subsystem.o 00:03:49.942 SO libspdk_lvol.so.10.0 00:03:49.942 SYMLINK libspdk_ublk.so 00:03:49.942 CC lib/nvmf/nvmf.o 00:03:49.942 CC lib/ftl/ftl_band.o 00:03:50.200 SYMLINK libspdk_lvol.so 00:03:50.200 CC lib/nvmf/nvmf_rpc.o 00:03:50.200 CC lib/ftl/ftl_band_ops.o 00:03:50.200 CC lib/ftl/ftl_writer.o 00:03:50.200 CC lib/scsi/scsi_rpc.o 00:03:50.457 CC lib/ftl/ftl_rq.o 00:03:50.457 CC lib/scsi/task.o 00:03:50.457 CC lib/ftl/ftl_reloc.o 00:03:50.729 CC lib/ftl/ftl_l2p_cache.o 00:03:50.729 CC lib/ftl/ftl_p2l.o 00:03:50.729 LIB libspdk_scsi.a 00:03:50.729 SO libspdk_scsi.so.9.0 00:03:50.729 CC lib/nvmf/transport.o 00:03:51.011 SYMLINK libspdk_scsi.so 00:03:51.011 CC lib/ftl/mngt/ftl_mngt.o 00:03:51.011 CC lib/nvmf/tcp.o 00:03:51.011 CC 
lib/nvmf/stubs.o 00:03:51.011 CC lib/nvmf/mdns_server.o 00:03:51.279 CC lib/nvmf/rdma.o 00:03:51.279 CC lib/nvmf/auth.o 00:03:51.279 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:51.279 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:51.279 CC lib/iscsi/conn.o 00:03:51.537 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:51.537 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:51.537 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:51.537 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:51.537 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:51.537 CC lib/vhost/vhost.o 00:03:51.795 CC lib/iscsi/init_grp.o 00:03:51.795 CC lib/vhost/vhost_rpc.o 00:03:51.795 CC lib/vhost/vhost_scsi.o 00:03:51.795 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:51.795 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:52.053 CC lib/iscsi/iscsi.o 00:03:52.053 CC lib/iscsi/md5.o 00:03:52.053 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:52.311 CC lib/vhost/vhost_blk.o 00:03:52.311 CC lib/vhost/rte_vhost_user.o 00:03:52.311 CC lib/iscsi/param.o 00:03:52.311 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:52.311 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:52.569 CC lib/iscsi/portal_grp.o 00:03:52.569 CC lib/ftl/utils/ftl_conf.o 00:03:52.829 CC lib/iscsi/tgt_node.o 00:03:52.829 CC lib/ftl/utils/ftl_md.o 00:03:52.829 CC lib/iscsi/iscsi_subsystem.o 00:03:52.829 CC lib/iscsi/iscsi_rpc.o 00:03:52.829 CC lib/iscsi/task.o 00:03:52.829 CC lib/ftl/utils/ftl_mempool.o 00:03:53.087 CC lib/ftl/utils/ftl_bitmap.o 00:03:53.087 CC lib/ftl/utils/ftl_property.o 00:03:53.346 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:53.346 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:53.346 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:53.346 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:53.346 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:53.346 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:53.346 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:53.346 LIB libspdk_vhost.a 00:03:53.605 SO libspdk_vhost.so.8.0 00:03:53.605 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:53.605 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:53.605 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:53.605 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:53.605 CC lib/ftl/base/ftl_base_dev.o 00:03:53.605 CC lib/ftl/base/ftl_base_bdev.o 00:03:53.605 CC lib/ftl/ftl_trace.o 00:03:53.605 SYMLINK libspdk_vhost.so 00:03:53.865 LIB libspdk_iscsi.a 00:03:53.865 LIB libspdk_nvmf.a 00:03:53.865 SO libspdk_iscsi.so.8.0 00:03:54.125 LIB libspdk_ftl.a 00:03:54.125 SO libspdk_nvmf.so.18.1 00:03:54.125 SYMLINK libspdk_iscsi.so 00:03:54.383 SO libspdk_ftl.so.9.0 00:03:54.383 SYMLINK libspdk_nvmf.so 00:03:54.642 SYMLINK libspdk_ftl.so 00:03:54.901 CC module/env_dpdk/env_dpdk_rpc.o 00:03:55.159 CC module/sock/posix/posix.o 00:03:55.159 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:55.159 CC module/keyring/file/keyring.o 00:03:55.159 CC module/accel/error/accel_error.o 00:03:55.159 CC module/scheduler/gscheduler/gscheduler.o 00:03:55.159 CC module/blob/bdev/blob_bdev.o 00:03:55.159 CC module/keyring/linux/keyring.o 00:03:55.159 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:55.159 CC module/accel/ioat/accel_ioat.o 00:03:55.159 LIB libspdk_env_dpdk_rpc.a 00:03:55.159 SO libspdk_env_dpdk_rpc.so.6.0 00:03:55.159 CC module/keyring/file/keyring_rpc.o 00:03:55.159 CC module/keyring/linux/keyring_rpc.o 00:03:55.159 LIB libspdk_scheduler_gscheduler.a 00:03:55.159 SYMLINK libspdk_env_dpdk_rpc.so 00:03:55.159 CC module/accel/error/accel_error_rpc.o 00:03:55.418 SO libspdk_scheduler_gscheduler.so.4.0 00:03:55.418 LIB libspdk_scheduler_dpdk_governor.a 00:03:55.418 LIB libspdk_scheduler_dynamic.a 00:03:55.418 CC 
module/accel/ioat/accel_ioat_rpc.o 00:03:55.418 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:55.418 SO libspdk_scheduler_dynamic.so.4.0 00:03:55.418 SYMLINK libspdk_scheduler_gscheduler.so 00:03:55.418 LIB libspdk_keyring_linux.a 00:03:55.418 LIB libspdk_blob_bdev.a 00:03:55.418 LIB libspdk_keyring_file.a 00:03:55.418 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:55.418 SYMLINK libspdk_scheduler_dynamic.so 00:03:55.418 SO libspdk_keyring_linux.so.1.0 00:03:55.418 SO libspdk_blob_bdev.so.11.0 00:03:55.418 SO libspdk_keyring_file.so.1.0 00:03:55.418 LIB libspdk_accel_error.a 00:03:55.418 CC module/accel/dsa/accel_dsa.o 00:03:55.418 CC module/accel/dsa/accel_dsa_rpc.o 00:03:55.418 LIB libspdk_accel_ioat.a 00:03:55.418 SYMLINK libspdk_blob_bdev.so 00:03:55.418 SYMLINK libspdk_keyring_linux.so 00:03:55.418 SYMLINK libspdk_keyring_file.so 00:03:55.418 SO libspdk_accel_error.so.2.0 00:03:55.418 SO libspdk_accel_ioat.so.6.0 00:03:55.676 SYMLINK libspdk_accel_error.so 00:03:55.676 CC module/accel/iaa/accel_iaa.o 00:03:55.676 CC module/accel/iaa/accel_iaa_rpc.o 00:03:55.676 SYMLINK libspdk_accel_ioat.so 00:03:55.677 CC module/bdev/error/vbdev_error.o 00:03:55.677 CC module/bdev/delay/vbdev_delay.o 00:03:55.677 LIB libspdk_accel_dsa.a 00:03:55.677 CC module/blobfs/bdev/blobfs_bdev.o 00:03:55.677 CC module/bdev/lvol/vbdev_lvol.o 00:03:55.677 CC module/bdev/gpt/gpt.o 00:03:55.935 LIB libspdk_accel_iaa.a 00:03:55.935 CC module/bdev/malloc/bdev_malloc.o 00:03:55.935 SO libspdk_accel_dsa.so.5.0 00:03:55.935 SO libspdk_accel_iaa.so.3.0 00:03:55.935 CC module/bdev/null/bdev_null.o 00:03:55.935 SYMLINK libspdk_accel_dsa.so 00:03:55.935 CC module/bdev/null/bdev_null_rpc.o 00:03:55.935 SYMLINK libspdk_accel_iaa.so 00:03:55.935 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:55.935 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:55.935 CC module/bdev/gpt/vbdev_gpt.o 00:03:55.935 LIB libspdk_sock_posix.a 00:03:56.193 CC module/bdev/error/vbdev_error_rpc.o 00:03:56.193 SO libspdk_sock_posix.so.6.0 00:03:56.193 LIB libspdk_blobfs_bdev.a 00:03:56.193 SYMLINK libspdk_sock_posix.so 00:03:56.193 SO libspdk_blobfs_bdev.so.6.0 00:03:56.193 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:56.193 LIB libspdk_bdev_null.a 00:03:56.193 SO libspdk_bdev_null.so.6.0 00:03:56.193 LIB libspdk_bdev_malloc.a 00:03:56.193 LIB libspdk_bdev_error.a 00:03:56.193 CC module/bdev/nvme/bdev_nvme.o 00:03:56.193 CC module/bdev/passthru/vbdev_passthru.o 00:03:56.193 SO libspdk_bdev_malloc.so.6.0 00:03:56.193 SO libspdk_bdev_error.so.6.0 00:03:56.193 SYMLINK libspdk_blobfs_bdev.so 00:03:56.451 LIB libspdk_bdev_gpt.a 00:03:56.451 SYMLINK libspdk_bdev_null.so 00:03:56.451 SO libspdk_bdev_gpt.so.6.0 00:03:56.451 CC module/bdev/raid/bdev_raid.o 00:03:56.451 SYMLINK libspdk_bdev_error.so 00:03:56.451 CC module/bdev/raid/bdev_raid_rpc.o 00:03:56.451 SYMLINK libspdk_bdev_malloc.so 00:03:56.451 CC module/bdev/raid/bdev_raid_sb.o 00:03:56.451 LIB libspdk_bdev_delay.a 00:03:56.451 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:56.451 SYMLINK libspdk_bdev_gpt.so 00:03:56.451 CC module/bdev/raid/raid0.o 00:03:56.451 SO libspdk_bdev_delay.so.6.0 00:03:56.451 CC module/bdev/split/vbdev_split.o 00:03:56.451 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:56.451 SYMLINK libspdk_bdev_delay.so 00:03:56.451 CC module/bdev/split/vbdev_split_rpc.o 00:03:56.709 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:56.709 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:56.709 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:56.709 CC module/bdev/raid/raid1.o 00:03:56.709 
LIB libspdk_bdev_split.a 00:03:56.709 SO libspdk_bdev_split.so.6.0 00:03:56.709 LIB libspdk_bdev_passthru.a 00:03:56.968 SO libspdk_bdev_passthru.so.6.0 00:03:56.968 SYMLINK libspdk_bdev_split.so 00:03:56.968 CC module/bdev/raid/concat.o 00:03:56.968 LIB libspdk_bdev_lvol.a 00:03:56.968 CC module/bdev/xnvme/bdev_xnvme.o 00:03:56.968 LIB libspdk_bdev_zone_block.a 00:03:56.968 SO libspdk_bdev_lvol.so.6.0 00:03:56.968 SO libspdk_bdev_zone_block.so.6.0 00:03:56.968 SYMLINK libspdk_bdev_passthru.so 00:03:56.968 SYMLINK libspdk_bdev_lvol.so 00:03:56.968 CC module/bdev/nvme/nvme_rpc.o 00:03:56.968 CC module/bdev/nvme/bdev_mdns_client.o 00:03:56.968 CC module/bdev/aio/bdev_aio.o 00:03:56.968 SYMLINK libspdk_bdev_zone_block.so 00:03:56.968 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:57.226 CC module/bdev/nvme/vbdev_opal.o 00:03:57.226 CC module/bdev/aio/bdev_aio_rpc.o 00:03:57.226 LIB libspdk_bdev_xnvme.a 00:03:57.226 SO libspdk_bdev_xnvme.so.3.0 00:03:57.226 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:57.226 CC module/bdev/ftl/bdev_ftl.o 00:03:57.226 CC module/bdev/iscsi/bdev_iscsi.o 00:03:57.484 SYMLINK libspdk_bdev_xnvme.so 00:03:57.484 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:57.484 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:57.484 LIB libspdk_bdev_aio.a 00:03:57.484 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:57.484 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:57.484 SO libspdk_bdev_aio.so.6.0 00:03:57.484 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:57.743 LIB libspdk_bdev_raid.a 00:03:57.743 SYMLINK libspdk_bdev_aio.so 00:03:57.743 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:57.743 LIB libspdk_bdev_ftl.a 00:03:57.743 SO libspdk_bdev_raid.so.6.0 00:03:57.743 SO libspdk_bdev_ftl.so.6.0 00:03:57.743 LIB libspdk_bdev_iscsi.a 00:03:57.743 SYMLINK libspdk_bdev_ftl.so 00:03:57.743 SYMLINK libspdk_bdev_raid.so 00:03:57.743 SO libspdk_bdev_iscsi.so.6.0 00:03:58.001 SYMLINK libspdk_bdev_iscsi.so 00:03:58.259 LIB libspdk_bdev_virtio.a 00:03:58.259 SO libspdk_bdev_virtio.so.6.0 00:03:58.259 SYMLINK libspdk_bdev_virtio.so 00:03:59.193 LIB libspdk_bdev_nvme.a 00:03:59.193 SO libspdk_bdev_nvme.so.7.0 00:03:59.451 SYMLINK libspdk_bdev_nvme.so 00:04:00.016 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:00.016 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:00.016 CC module/event/subsystems/iobuf/iobuf.o 00:04:00.016 CC module/event/subsystems/sock/sock.o 00:04:00.016 CC module/event/subsystems/scheduler/scheduler.o 00:04:00.016 CC module/event/subsystems/vmd/vmd.o 00:04:00.016 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:00.016 CC module/event/subsystems/keyring/keyring.o 00:04:00.016 LIB libspdk_event_keyring.a 00:04:00.016 LIB libspdk_event_scheduler.a 00:04:00.016 LIB libspdk_event_sock.a 00:04:00.016 LIB libspdk_event_vhost_blk.a 00:04:00.016 LIB libspdk_event_iobuf.a 00:04:00.016 SO libspdk_event_vhost_blk.so.3.0 00:04:00.016 SO libspdk_event_scheduler.so.4.0 00:04:00.016 SO libspdk_event_keyring.so.1.0 00:04:00.016 SO libspdk_event_sock.so.5.0 00:04:00.016 LIB libspdk_event_vmd.a 00:04:00.273 SO libspdk_event_vmd.so.6.0 00:04:00.274 SO libspdk_event_iobuf.so.3.0 00:04:00.274 SYMLINK libspdk_event_vhost_blk.so 00:04:00.274 SYMLINK libspdk_event_scheduler.so 00:04:00.274 SYMLINK libspdk_event_sock.so 00:04:00.274 SYMLINK libspdk_event_keyring.so 00:04:00.274 SYMLINK libspdk_event_iobuf.so 00:04:00.274 SYMLINK libspdk_event_vmd.so 00:04:00.531 CC module/event/subsystems/accel/accel.o 00:04:00.789 LIB libspdk_event_accel.a 00:04:00.789 SO libspdk_event_accel.so.6.0 00:04:00.789 SYMLINK 
libspdk_event_accel.so 00:04:01.047 CC module/event/subsystems/bdev/bdev.o 00:04:01.306 LIB libspdk_event_bdev.a 00:04:01.306 SO libspdk_event_bdev.so.6.0 00:04:01.306 SYMLINK libspdk_event_bdev.so 00:04:01.564 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:01.564 CC module/event/subsystems/nbd/nbd.o 00:04:01.564 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:01.564 CC module/event/subsystems/scsi/scsi.o 00:04:01.564 CC module/event/subsystems/ublk/ublk.o 00:04:01.822 LIB libspdk_event_nbd.a 00:04:01.823 LIB libspdk_event_ublk.a 00:04:01.823 LIB libspdk_event_scsi.a 00:04:01.823 SO libspdk_event_nbd.so.6.0 00:04:01.823 SO libspdk_event_ublk.so.3.0 00:04:01.823 SO libspdk_event_scsi.so.6.0 00:04:01.823 SYMLINK libspdk_event_nbd.so 00:04:01.823 SYMLINK libspdk_event_ublk.so 00:04:01.823 LIB libspdk_event_nvmf.a 00:04:01.823 SYMLINK libspdk_event_scsi.so 00:04:01.823 SO libspdk_event_nvmf.so.6.0 00:04:02.081 SYMLINK libspdk_event_nvmf.so 00:04:02.081 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:02.081 CC module/event/subsystems/iscsi/iscsi.o 00:04:02.339 LIB libspdk_event_vhost_scsi.a 00:04:02.339 SO libspdk_event_vhost_scsi.so.3.0 00:04:02.339 LIB libspdk_event_iscsi.a 00:04:02.339 SO libspdk_event_iscsi.so.6.0 00:04:02.339 SYMLINK libspdk_event_vhost_scsi.so 00:04:02.339 SYMLINK libspdk_event_iscsi.so 00:04:02.598 SO libspdk.so.6.0 00:04:02.598 SYMLINK libspdk.so 00:04:02.857 TEST_HEADER include/spdk/accel.h 00:04:02.857 TEST_HEADER include/spdk/accel_module.h 00:04:02.857 TEST_HEADER include/spdk/assert.h 00:04:02.857 TEST_HEADER include/spdk/barrier.h 00:04:02.857 TEST_HEADER include/spdk/base64.h 00:04:02.857 CC test/rpc_client/rpc_client_test.o 00:04:02.857 TEST_HEADER include/spdk/bdev.h 00:04:02.857 TEST_HEADER include/spdk/bdev_module.h 00:04:02.857 TEST_HEADER include/spdk/bdev_zone.h 00:04:02.857 CXX app/trace/trace.o 00:04:02.857 TEST_HEADER include/spdk/bit_array.h 00:04:02.857 TEST_HEADER include/spdk/bit_pool.h 00:04:02.857 TEST_HEADER include/spdk/blob_bdev.h 00:04:02.857 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:02.857 TEST_HEADER include/spdk/blobfs.h 00:04:02.857 TEST_HEADER include/spdk/blob.h 00:04:02.857 TEST_HEADER include/spdk/conf.h 00:04:02.857 CC app/trace_record/trace_record.o 00:04:02.857 TEST_HEADER include/spdk/config.h 00:04:02.857 TEST_HEADER include/spdk/cpuset.h 00:04:02.857 TEST_HEADER include/spdk/crc16.h 00:04:02.857 TEST_HEADER include/spdk/crc32.h 00:04:02.857 TEST_HEADER include/spdk/crc64.h 00:04:02.857 TEST_HEADER include/spdk/dif.h 00:04:02.857 TEST_HEADER include/spdk/dma.h 00:04:02.857 TEST_HEADER include/spdk/endian.h 00:04:02.857 TEST_HEADER include/spdk/env_dpdk.h 00:04:02.857 TEST_HEADER include/spdk/env.h 00:04:02.857 CC app/nvmf_tgt/nvmf_main.o 00:04:02.857 TEST_HEADER include/spdk/event.h 00:04:02.857 TEST_HEADER include/spdk/fd_group.h 00:04:02.857 TEST_HEADER include/spdk/fd.h 00:04:02.857 TEST_HEADER include/spdk/file.h 00:04:02.857 TEST_HEADER include/spdk/ftl.h 00:04:02.857 TEST_HEADER include/spdk/gpt_spec.h 00:04:02.857 TEST_HEADER include/spdk/hexlify.h 00:04:02.857 TEST_HEADER include/spdk/histogram_data.h 00:04:02.857 TEST_HEADER include/spdk/idxd.h 00:04:02.857 TEST_HEADER include/spdk/idxd_spec.h 00:04:02.857 TEST_HEADER include/spdk/init.h 00:04:02.857 TEST_HEADER include/spdk/ioat.h 00:04:02.857 TEST_HEADER include/spdk/ioat_spec.h 00:04:02.857 TEST_HEADER include/spdk/iscsi_spec.h 00:04:02.857 TEST_HEADER include/spdk/json.h 00:04:02.857 TEST_HEADER include/spdk/jsonrpc.h 00:04:02.857 TEST_HEADER 
include/spdk/keyring.h 00:04:02.857 TEST_HEADER include/spdk/keyring_module.h 00:04:02.857 TEST_HEADER include/spdk/likely.h 00:04:02.857 TEST_HEADER include/spdk/log.h 00:04:02.857 TEST_HEADER include/spdk/lvol.h 00:04:02.857 CC examples/util/zipf/zipf.o 00:04:02.857 TEST_HEADER include/spdk/memory.h 00:04:02.857 TEST_HEADER include/spdk/mmio.h 00:04:02.857 TEST_HEADER include/spdk/nbd.h 00:04:02.857 TEST_HEADER include/spdk/notify.h 00:04:02.857 TEST_HEADER include/spdk/nvme.h 00:04:02.857 CC test/thread/poller_perf/poller_perf.o 00:04:02.857 TEST_HEADER include/spdk/nvme_intel.h 00:04:02.857 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:02.857 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:02.857 TEST_HEADER include/spdk/nvme_spec.h 00:04:02.857 TEST_HEADER include/spdk/nvme_zns.h 00:04:02.857 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:02.857 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:02.857 TEST_HEADER include/spdk/nvmf.h 00:04:02.857 TEST_HEADER include/spdk/nvmf_spec.h 00:04:02.857 TEST_HEADER include/spdk/nvmf_transport.h 00:04:03.115 TEST_HEADER include/spdk/opal.h 00:04:03.115 TEST_HEADER include/spdk/opal_spec.h 00:04:03.115 CC test/app/bdev_svc/bdev_svc.o 00:04:03.115 TEST_HEADER include/spdk/pci_ids.h 00:04:03.115 TEST_HEADER include/spdk/pipe.h 00:04:03.115 TEST_HEADER include/spdk/queue.h 00:04:03.115 CC test/dma/test_dma/test_dma.o 00:04:03.116 TEST_HEADER include/spdk/reduce.h 00:04:03.116 TEST_HEADER include/spdk/rpc.h 00:04:03.116 TEST_HEADER include/spdk/scheduler.h 00:04:03.116 TEST_HEADER include/spdk/scsi.h 00:04:03.116 TEST_HEADER include/spdk/scsi_spec.h 00:04:03.116 TEST_HEADER include/spdk/sock.h 00:04:03.116 TEST_HEADER include/spdk/stdinc.h 00:04:03.116 TEST_HEADER include/spdk/string.h 00:04:03.116 TEST_HEADER include/spdk/thread.h 00:04:03.116 TEST_HEADER include/spdk/trace.h 00:04:03.116 TEST_HEADER include/spdk/trace_parser.h 00:04:03.116 TEST_HEADER include/spdk/tree.h 00:04:03.116 TEST_HEADER include/spdk/ublk.h 00:04:03.116 TEST_HEADER include/spdk/util.h 00:04:03.116 TEST_HEADER include/spdk/uuid.h 00:04:03.116 TEST_HEADER include/spdk/version.h 00:04:03.116 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:03.116 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:03.116 TEST_HEADER include/spdk/vhost.h 00:04:03.116 TEST_HEADER include/spdk/vmd.h 00:04:03.116 TEST_HEADER include/spdk/xor.h 00:04:03.116 CC test/env/mem_callbacks/mem_callbacks.o 00:04:03.116 TEST_HEADER include/spdk/zipf.h 00:04:03.116 CXX test/cpp_headers/accel.o 00:04:03.116 LINK rpc_client_test 00:04:03.116 LINK nvmf_tgt 00:04:03.116 LINK zipf 00:04:03.116 LINK poller_perf 00:04:03.116 LINK bdev_svc 00:04:03.116 LINK spdk_trace_record 00:04:03.116 CXX test/cpp_headers/accel_module.o 00:04:03.374 LINK mem_callbacks 00:04:03.374 LINK spdk_trace 00:04:03.374 CXX test/cpp_headers/assert.o 00:04:03.374 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:03.374 CC examples/ioat/perf/perf.o 00:04:03.374 CC test/env/vtophys/vtophys.o 00:04:03.630 LINK test_dma 00:04:03.630 CXX test/cpp_headers/barrier.o 00:04:03.630 CC test/event/event_perf/event_perf.o 00:04:03.630 CC examples/sock/hello_world/hello_sock.o 00:04:03.630 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:03.630 CC examples/thread/thread/thread_ex.o 00:04:03.630 CC app/iscsi_tgt/iscsi_tgt.o 00:04:03.630 LINK vtophys 00:04:03.630 LINK interrupt_tgt 00:04:03.630 CXX test/cpp_headers/base64.o 00:04:03.630 LINK event_perf 00:04:03.630 LINK ioat_perf 00:04:03.887 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:03.887 CXX test/cpp_headers/bdev.o 
00:04:03.887 LINK iscsi_tgt 00:04:03.887 LINK hello_sock 00:04:03.887 CXX test/cpp_headers/bdev_module.o 00:04:03.887 LINK thread 00:04:03.887 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:03.887 CC examples/ioat/verify/verify.o 00:04:03.887 CC test/event/reactor/reactor.o 00:04:04.143 LINK nvme_fuzz 00:04:04.143 CXX test/cpp_headers/bdev_zone.o 00:04:04.143 LINK env_dpdk_post_init 00:04:04.143 LINK reactor 00:04:04.401 CC app/spdk_tgt/spdk_tgt.o 00:04:04.401 LINK verify 00:04:04.401 CXX test/cpp_headers/bit_array.o 00:04:04.401 CC test/accel/dif/dif.o 00:04:04.401 CC test/blobfs/mkfs/mkfs.o 00:04:04.401 CC test/lvol/esnap/esnap.o 00:04:04.401 CC test/event/reactor_perf/reactor_perf.o 00:04:04.401 CC test/env/memory/memory_ut.o 00:04:04.659 CXX test/cpp_headers/bit_pool.o 00:04:04.659 LINK spdk_tgt 00:04:04.659 CC test/nvme/aer/aer.o 00:04:04.659 LINK reactor_perf 00:04:04.659 LINK mkfs 00:04:04.659 CC examples/vmd/lsvmd/lsvmd.o 00:04:04.659 CXX test/cpp_headers/blob_bdev.o 00:04:04.917 LINK lsvmd 00:04:04.917 CC test/event/app_repeat/app_repeat.o 00:04:04.917 CC app/spdk_lspci/spdk_lspci.o 00:04:04.917 LINK dif 00:04:04.917 CC app/spdk_nvme_perf/perf.o 00:04:04.917 CXX test/cpp_headers/blobfs_bdev.o 00:04:04.917 LINK aer 00:04:05.176 CC examples/vmd/led/led.o 00:04:05.176 LINK spdk_lspci 00:04:05.176 LINK app_repeat 00:04:05.176 CXX test/cpp_headers/blobfs.o 00:04:05.176 CXX test/cpp_headers/blob.o 00:04:05.176 LINK led 00:04:05.176 CXX test/cpp_headers/conf.o 00:04:05.176 CC test/nvme/reset/reset.o 00:04:05.434 LINK memory_ut 00:04:05.434 CC test/event/scheduler/scheduler.o 00:04:05.434 CXX test/cpp_headers/config.o 00:04:05.434 CC test/app/histogram_perf/histogram_perf.o 00:04:05.434 CXX test/cpp_headers/cpuset.o 00:04:05.692 LINK reset 00:04:05.692 CC test/bdev/bdevio/bdevio.o 00:04:05.692 CC examples/idxd/perf/perf.o 00:04:05.692 LINK histogram_perf 00:04:05.692 CXX test/cpp_headers/crc16.o 00:04:05.692 CC test/env/pci/pci_ut.o 00:04:05.692 LINK scheduler 00:04:05.692 CXX test/cpp_headers/crc32.o 00:04:05.950 CC test/nvme/sgl/sgl.o 00:04:05.950 CC app/spdk_nvme_identify/identify.o 00:04:05.950 CXX test/cpp_headers/crc64.o 00:04:05.950 LINK spdk_nvme_perf 00:04:05.950 CC app/spdk_nvme_discover/discovery_aer.o 00:04:05.950 LINK iscsi_fuzz 00:04:05.950 LINK idxd_perf 00:04:05.950 LINK bdevio 00:04:06.208 CXX test/cpp_headers/dif.o 00:04:06.208 LINK pci_ut 00:04:06.208 LINK sgl 00:04:06.208 LINK spdk_nvme_discover 00:04:06.466 CXX test/cpp_headers/dma.o 00:04:06.466 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:06.466 CC examples/accel/perf/accel_perf.o 00:04:06.466 CC test/app/jsoncat/jsoncat.o 00:04:06.466 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:06.466 CC examples/blob/hello_world/hello_blob.o 00:04:06.466 CC test/nvme/e2edp/nvme_dp.o 00:04:06.466 CXX test/cpp_headers/endian.o 00:04:06.466 LINK jsoncat 00:04:06.724 CC examples/blob/cli/blobcli.o 00:04:06.724 LINK hello_blob 00:04:06.724 CC examples/nvme/hello_world/hello_world.o 00:04:06.724 CXX test/cpp_headers/env_dpdk.o 00:04:06.724 LINK nvme_dp 00:04:06.724 CC test/nvme/overhead/overhead.o 00:04:06.983 CXX test/cpp_headers/env.o 00:04:06.983 LINK vhost_fuzz 00:04:06.983 LINK accel_perf 00:04:06.983 LINK hello_world 00:04:06.983 CC app/spdk_top/spdk_top.o 00:04:06.983 LINK spdk_nvme_identify 00:04:06.983 CC test/nvme/err_injection/err_injection.o 00:04:07.241 CXX test/cpp_headers/event.o 00:04:07.241 LINK overhead 00:04:07.241 CC test/app/stub/stub.o 00:04:07.241 LINK blobcli 00:04:07.241 CC 
examples/nvme/reconnect/reconnect.o 00:04:07.241 CC test/nvme/startup/startup.o 00:04:07.241 LINK err_injection 00:04:07.241 CXX test/cpp_headers/fd_group.o 00:04:07.498 LINK stub 00:04:07.499 CC examples/bdev/hello_world/hello_bdev.o 00:04:07.499 CXX test/cpp_headers/fd.o 00:04:07.499 CXX test/cpp_headers/file.o 00:04:07.499 LINK startup 00:04:07.499 CC examples/bdev/bdevperf/bdevperf.o 00:04:07.499 CC test/nvme/reserve/reserve.o 00:04:07.756 CXX test/cpp_headers/ftl.o 00:04:07.756 LINK reconnect 00:04:07.756 CC test/nvme/simple_copy/simple_copy.o 00:04:07.756 LINK hello_bdev 00:04:07.756 CC test/nvme/connect_stress/connect_stress.o 00:04:07.756 CC test/nvme/boot_partition/boot_partition.o 00:04:07.756 LINK reserve 00:04:08.014 CXX test/cpp_headers/gpt_spec.o 00:04:08.014 LINK connect_stress 00:04:08.014 LINK boot_partition 00:04:08.014 LINK simple_copy 00:04:08.014 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:08.014 CC test/nvme/compliance/nvme_compliance.o 00:04:08.014 CXX test/cpp_headers/hexlify.o 00:04:08.271 CC app/vhost/vhost.o 00:04:08.271 LINK spdk_top 00:04:08.271 CXX test/cpp_headers/histogram_data.o 00:04:08.271 CC test/nvme/fused_ordering/fused_ordering.o 00:04:08.271 CC examples/nvme/hotplug/hotplug.o 00:04:08.271 CC examples/nvme/arbitration/arbitration.o 00:04:08.271 LINK vhost 00:04:08.529 CXX test/cpp_headers/idxd.o 00:04:08.529 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:08.529 LINK nvme_compliance 00:04:08.529 LINK bdevperf 00:04:08.529 LINK fused_ordering 00:04:08.529 LINK hotplug 00:04:08.529 CXX test/cpp_headers/idxd_spec.o 00:04:08.529 LINK nvme_manage 00:04:08.529 LINK cmb_copy 00:04:08.787 LINK arbitration 00:04:08.787 CXX test/cpp_headers/init.o 00:04:08.787 CC app/spdk_dd/spdk_dd.o 00:04:08.787 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:08.787 CC app/fio/nvme/fio_plugin.o 00:04:08.787 CC examples/nvme/abort/abort.o 00:04:08.787 CXX test/cpp_headers/ioat.o 00:04:09.046 CXX test/cpp_headers/ioat_spec.o 00:04:09.046 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:09.046 CC test/nvme/fdp/fdp.o 00:04:09.046 CC app/fio/bdev/fio_plugin.o 00:04:09.046 LINK doorbell_aers 00:04:09.046 CXX test/cpp_headers/iscsi_spec.o 00:04:09.046 LINK pmr_persistence 00:04:09.046 CC test/nvme/cuse/cuse.o 00:04:09.304 CXX test/cpp_headers/json.o 00:04:09.304 LINK spdk_dd 00:04:09.304 CXX test/cpp_headers/jsonrpc.o 00:04:09.304 CXX test/cpp_headers/keyring.o 00:04:09.304 LINK abort 00:04:09.304 CXX test/cpp_headers/keyring_module.o 00:04:09.304 LINK fdp 00:04:09.562 CXX test/cpp_headers/likely.o 00:04:09.562 CXX test/cpp_headers/log.o 00:04:09.562 CXX test/cpp_headers/lvol.o 00:04:09.562 CXX test/cpp_headers/memory.o 00:04:09.562 LINK spdk_bdev 00:04:09.562 CXX test/cpp_headers/mmio.o 00:04:09.562 LINK spdk_nvme 00:04:09.562 CXX test/cpp_headers/nbd.o 00:04:09.562 CXX test/cpp_headers/notify.o 00:04:09.562 CXX test/cpp_headers/nvme.o 00:04:09.562 CXX test/cpp_headers/nvme_intel.o 00:04:09.562 CXX test/cpp_headers/nvme_ocssd.o 00:04:09.562 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:09.819 CXX test/cpp_headers/nvme_spec.o 00:04:09.819 CXX test/cpp_headers/nvme_zns.o 00:04:09.819 CC examples/nvmf/nvmf/nvmf.o 00:04:09.819 CXX test/cpp_headers/nvmf_cmd.o 00:04:09.819 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:09.819 CXX test/cpp_headers/nvmf.o 00:04:09.819 CXX test/cpp_headers/nvmf_spec.o 00:04:09.819 CXX test/cpp_headers/nvmf_transport.o 00:04:09.819 CXX test/cpp_headers/opal.o 00:04:09.819 CXX test/cpp_headers/opal_spec.o 00:04:10.077 CXX test/cpp_headers/pci_ids.o 
00:04:10.077 CXX test/cpp_headers/pipe.o 00:04:10.077 CXX test/cpp_headers/queue.o 00:04:10.077 CXX test/cpp_headers/reduce.o 00:04:10.077 CXX test/cpp_headers/rpc.o 00:04:10.077 LINK nvmf 00:04:10.077 CXX test/cpp_headers/scheduler.o 00:04:10.077 CXX test/cpp_headers/scsi.o 00:04:10.077 CXX test/cpp_headers/scsi_spec.o 00:04:10.335 CXX test/cpp_headers/sock.o 00:04:10.335 CXX test/cpp_headers/stdinc.o 00:04:10.335 CXX test/cpp_headers/string.o 00:04:10.335 CXX test/cpp_headers/thread.o 00:04:10.335 CXX test/cpp_headers/trace.o 00:04:10.335 CXX test/cpp_headers/trace_parser.o 00:04:10.335 CXX test/cpp_headers/tree.o 00:04:10.335 CXX test/cpp_headers/ublk.o 00:04:10.335 CXX test/cpp_headers/util.o 00:04:10.335 CXX test/cpp_headers/uuid.o 00:04:10.335 CXX test/cpp_headers/version.o 00:04:10.335 CXX test/cpp_headers/vfio_user_pci.o 00:04:10.335 CXX test/cpp_headers/vfio_user_spec.o 00:04:10.594 CXX test/cpp_headers/vhost.o 00:04:10.594 CXX test/cpp_headers/vmd.o 00:04:10.594 CXX test/cpp_headers/xor.o 00:04:10.594 CXX test/cpp_headers/zipf.o 00:04:10.594 LINK cuse 00:04:11.160 LINK esnap 00:04:11.734 00:04:11.734 real 1m0.618s 00:04:11.734 user 5m48.173s 00:04:11.734 sys 1m6.941s 00:04:11.734 05:51:03 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:11.734 05:51:03 make -- common/autotest_common.sh@10 -- $ set +x 00:04:11.734 ************************************ 00:04:11.734 END TEST make 00:04:11.734 ************************************ 00:04:11.734 05:51:03 -- common/autotest_common.sh@1142 -- $ return 0 00:04:11.734 05:51:03 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:11.734 05:51:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:11.734 05:51:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:11.734 05:51:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:11.734 05:51:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:11.734 05:51:03 -- pm/common@44 -- $ pid=5975 00:04:11.734 05:51:03 -- pm/common@50 -- $ kill -TERM 5975 00:04:11.734 05:51:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:11.734 05:51:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:11.734 05:51:03 -- pm/common@44 -- $ pid=5977 00:04:11.734 05:51:03 -- pm/common@50 -- $ kill -TERM 5977 00:04:11.734 05:51:03 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:11.734 05:51:03 -- nvmf/common.sh@7 -- # uname -s 00:04:11.734 05:51:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:11.734 05:51:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:11.734 05:51:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:11.734 05:51:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:11.734 05:51:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:11.734 05:51:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:11.734 05:51:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:11.734 05:51:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:11.734 05:51:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:11.734 05:51:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:11.734 05:51:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a816761f-03ef-42fc-91d8-b7286b6eff78 00:04:11.734 05:51:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=a816761f-03ef-42fc-91d8-b7286b6eff78 00:04:11.734 05:51:03 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:11.734 05:51:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:11.734 05:51:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:11.734 05:51:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:11.734 05:51:03 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:11.734 05:51:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:11.734 05:51:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:11.734 05:51:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:11.734 05:51:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.734 05:51:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.734 05:51:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.734 05:51:03 -- paths/export.sh@5 -- # export PATH 00:04:11.734 05:51:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.734 05:51:03 -- nvmf/common.sh@47 -- # : 0 00:04:11.734 05:51:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:11.734 05:51:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:11.734 05:51:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:11.734 05:51:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:11.734 05:51:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:11.734 05:51:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:11.734 05:51:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:11.734 05:51:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:11.734 05:51:03 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:11.734 05:51:03 -- spdk/autotest.sh@32 -- # uname -s 00:04:11.734 05:51:03 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:11.734 05:51:03 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:11.734 05:51:03 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:11.734 05:51:03 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:11.734 05:51:03 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:11.734 05:51:03 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:12.017 05:51:03 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:12.017 05:51:03 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:12.017 05:51:03 -- spdk/autotest.sh@48 -- # udevadm_pid=65790 00:04:12.017 
05:51:03 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:12.017 05:51:03 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:12.017 05:51:03 -- pm/common@17 -- # local monitor 00:04:12.017 05:51:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.017 05:51:03 -- pm/common@21 -- # date +%s 00:04:12.017 05:51:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:12.017 05:51:03 -- pm/common@25 -- # sleep 1 00:04:12.017 05:51:03 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720849863 00:04:12.017 05:51:03 -- pm/common@21 -- # date +%s 00:04:12.017 05:51:03 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720849863 00:04:12.017 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720849863_collect-vmstat.pm.log 00:04:12.017 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720849863_collect-cpu-load.pm.log 00:04:12.959 05:51:04 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:12.959 05:51:04 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:12.959 05:51:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:12.959 05:51:04 -- common/autotest_common.sh@10 -- # set +x 00:04:12.959 05:51:04 -- spdk/autotest.sh@59 -- # create_test_list 00:04:12.959 05:51:04 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:12.959 05:51:04 -- common/autotest_common.sh@10 -- # set +x 00:04:12.959 05:51:04 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:12.959 05:51:04 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:12.959 05:51:04 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:12.959 05:51:04 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:12.959 05:51:04 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:12.959 05:51:04 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:12.959 05:51:04 -- common/autotest_common.sh@1455 -- # uname 00:04:12.959 05:51:04 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:12.959 05:51:04 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:12.959 05:51:04 -- common/autotest_common.sh@1475 -- # uname 00:04:12.959 05:51:04 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:12.959 05:51:04 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:12.959 05:51:04 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:12.959 05:51:04 -- spdk/autotest.sh@72 -- # hash lcov 00:04:12.959 05:51:04 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:12.959 05:51:04 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:12.959 --rc lcov_branch_coverage=1 00:04:12.959 --rc lcov_function_coverage=1 00:04:12.959 --rc genhtml_branch_coverage=1 00:04:12.959 --rc genhtml_function_coverage=1 00:04:12.959 --rc genhtml_legend=1 00:04:12.959 --rc geninfo_all_blocks=1 00:04:12.959 ' 00:04:12.959 05:51:04 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:12.959 --rc lcov_branch_coverage=1 00:04:12.959 --rc lcov_function_coverage=1 00:04:12.959 --rc genhtml_branch_coverage=1 00:04:12.959 --rc genhtml_function_coverage=1 00:04:12.959 --rc genhtml_legend=1 00:04:12.959 --rc geninfo_all_blocks=1 00:04:12.959 ' 00:04:12.959 
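The LCOV_OPTS exported above configure the coverage pass that follows: autotest first captures an initial ("-i") Baseline tracefile before any test executes, so that sources the suite never touches still show up at 0% when the post-test capture is merged in. A minimal stand-alone sketch of that baseline pattern, with illustrative paths rather than the script's own:

# Hedged sketch of the lcov baseline workflow (paths are examples, not autotest's).
# -c/--capture reads the gcov counters, -i/--initial records an all-zero baseline,
# -d names the build tree, -a merges tracefiles together.
lcov -c -i -d ./build -o cov_base.info    # zero baseline, captured before the tests
# ... run the test suite ...
lcov -c -d ./build -o cov_test.info       # counters after the tests
lcov -a cov_base.info -a cov_test.info -o cov_total.info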
05:51:04 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:12.959 --rc lcov_branch_coverage=1 00:04:12.959 --rc lcov_function_coverage=1 00:04:12.959 --rc genhtml_branch_coverage=1 00:04:12.959 --rc genhtml_function_coverage=1 00:04:12.959 --rc genhtml_legend=1 00:04:12.959 --rc geninfo_all_blocks=1 00:04:12.959 --no-external' 00:04:12.959 05:51:04 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:12.959 --rc lcov_branch_coverage=1 00:04:12.959 --rc lcov_function_coverage=1 00:04:12.959 --rc genhtml_branch_coverage=1 00:04:12.959 --rc genhtml_function_coverage=1 00:04:12.959 --rc genhtml_legend=1 00:04:12.959 --rc geninfo_all_blocks=1 00:04:12.959 --no-external' 00:04:12.959 05:51:04 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:12.959 lcov: LCOV version 1.14 00:04:12.959 05:51:04 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:27.833 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:27.833 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:40.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:40.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:40.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:40.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:40.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:40.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:40.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:40.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:40.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:40.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:40.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:40.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:40.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:40.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:40.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:40.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:40.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:40.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:40.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:40.041 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:40.041
[... the same pair of geninfo warnings ("no functions found" / "GCOV did not produce any data") repeats for every remaining gcno under /home/vagrant/spdk_repo/spdk/test/cpp_headers, blob_bdev.gcno through thread.gcno; the duplicate entries are omitted ...]
/home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:40.042 geninfo: WARNING: GCOV did not produce
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:40.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:40.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:40.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:40.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:40.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:40.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:40.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:40.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:40.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:40.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:40.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:40.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:40.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:40.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:40.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:40.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:40.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:40.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:40.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:40.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:40.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:40.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:40.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:40.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:44.230 05:51:35 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:44.230 05:51:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:44.230 05:51:35 -- common/autotest_common.sh@10 -- # set +x 00:04:44.230 05:51:35 -- spdk/autotest.sh@91 -- # rm -f 00:04:44.230 05:51:35 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:44.230 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:44.489 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:44.489 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:44.489 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:44.489 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:44.747 05:51:36 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:44.747 05:51:36 -- common/autotest_common.sh@1669 -- # zoned_devs=() 
00:04:44.747 05:51:36 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:44.747 05:51:36 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:44.748 05:51:36 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:44.748 05:51:36 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:44.748 05:51:36 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:44.748 05:51:36 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:44.748 05:51:36 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:44.748 05:51:36 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:44.748 05:51:36 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:44.748 05:51:36 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:44.748 05:51:36 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:44.748 05:51:36 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:44.748 05:51:36 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:44.748 05:51:36 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:04:44.748 05:51:36 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:04:44.748 05:51:36 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:44.748 05:51:36 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:44.748 05:51:36 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:44.748 05:51:36 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:04:44.748 05:51:36 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:04:44.748 05:51:36 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:44.748 05:51:36 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:44.748 05:51:36 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:44.748 05:51:36 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:04:44.748 05:51:36 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:04:44.748 05:51:36 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:44.748 05:51:36 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:44.748 05:51:36 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:44.748 05:51:36 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:04:44.748 05:51:36 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:04:44.748 05:51:36 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:44.748 05:51:36 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:44.748 05:51:36 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:44.748 05:51:36 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:04:44.748 05:51:36 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:04:44.748 05:51:36 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:44.748 05:51:36 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:44.748 05:51:36 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:44.748 05:51:36 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:44.748 05:51:36 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:44.748 05:51:36 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:44.748 05:51:36 -- scripts/common.sh@378 -- 
# local block=/dev/nvme0n1 pt 00:04:44.748 05:51:36 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:44.748 No valid GPT data, bailing 00:04:44.748 05:51:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:44.748 05:51:36 -- scripts/common.sh@391 -- # pt= 00:04:44.748 05:51:36 -- scripts/common.sh@392 -- # return 1 00:04:44.748 05:51:36 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:44.748 1+0 records in 00:04:44.748 1+0 records out 00:04:44.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127768 s, 82.1 MB/s 00:04:44.748 05:51:36 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:44.748 05:51:36 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:44.748 05:51:36 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:44.748 05:51:36 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:44.748 05:51:36 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:44.748 No valid GPT data, bailing 00:04:44.748 05:51:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:44.748 05:51:36 -- scripts/common.sh@391 -- # pt= 00:04:44.748 05:51:36 -- scripts/common.sh@392 -- # return 1 00:04:44.748 05:51:36 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:44.748 1+0 records in 00:04:44.748 1+0 records out 00:04:44.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436828 s, 240 MB/s 00:04:44.748 05:51:36 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:44.748 05:51:36 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:44.748 05:51:36 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:04:44.748 05:51:36 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:04:44.748 05:51:36 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:44.748 No valid GPT data, bailing 00:04:44.748 05:51:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:45.008 05:51:36 -- scripts/common.sh@391 -- # pt= 00:04:45.008 05:51:36 -- scripts/common.sh@392 -- # return 1 00:04:45.008 05:51:36 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:45.008 1+0 records in 00:04:45.008 1+0 records out 00:04:45.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00371758 s, 282 MB/s 00:04:45.008 05:51:36 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:45.008 05:51:36 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:45.008 05:51:36 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:04:45.008 05:51:36 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:04:45.008 05:51:36 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:45.008 No valid GPT data, bailing 00:04:45.008 05:51:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:45.008 05:51:36 -- scripts/common.sh@391 -- # pt= 00:04:45.008 05:51:36 -- scripts/common.sh@392 -- # return 1 00:04:45.008 05:51:36 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:45.008 1+0 records in 00:04:45.008 1+0 records out 00:04:45.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488201 s, 215 MB/s 00:04:45.008 05:51:36 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:45.008 05:51:36 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:45.008 05:51:36 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:04:45.008 05:51:36 -- 
scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:04:45.008 05:51:36 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:45.008 No valid GPT data, bailing 00:04:45.008 05:51:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:45.008 05:51:36 -- scripts/common.sh@391 -- # pt= 00:04:45.008 05:51:36 -- scripts/common.sh@392 -- # return 1 00:04:45.008 05:51:36 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:45.008 1+0 records in 00:04:45.008 1+0 records out 00:04:45.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00372626 s, 281 MB/s 00:04:45.008 05:51:36 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:45.008 05:51:36 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:45.008 05:51:36 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:04:45.008 05:51:36 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:04:45.008 05:51:36 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:45.008 No valid GPT data, bailing 00:04:45.008 05:51:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:45.008 05:51:36 -- scripts/common.sh@391 -- # pt= 00:04:45.008 05:51:36 -- scripts/common.sh@392 -- # return 1 00:04:45.008 05:51:36 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:45.008 1+0 records in 00:04:45.008 1+0 records out 00:04:45.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412007 s, 255 MB/s 00:04:45.008 05:51:36 -- spdk/autotest.sh@118 -- # sync 00:04:45.268 05:51:36 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:45.268 05:51:36 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:45.268 05:51:36 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:47.174 05:51:38 -- spdk/autotest.sh@124 -- # uname -s 00:04:47.174 05:51:38 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:47.174 05:51:38 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:47.174 05:51:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.175 05:51:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.175 05:51:38 -- common/autotest_common.sh@10 -- # set +x 00:04:47.175 ************************************ 00:04:47.175 START TEST setup.sh 00:04:47.175 ************************************ 00:04:47.175 05:51:38 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:47.175 * Looking for test storage... 00:04:47.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:47.175 05:51:38 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:47.175 05:51:38 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:47.175 05:51:38 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:47.175 05:51:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.175 05:51:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.175 05:51:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:47.175 ************************************ 00:04:47.175 START TEST acl 00:04:47.175 ************************************ 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:47.175 * Looking for test storage... 
00:04:47.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:47.175 05:51:38 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 
00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:47.175 05:51:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:47.175 05:51:38 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:47.175 05:51:38 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:47.175 05:51:38 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:47.175 05:51:38 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:47.175 05:51:38 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:47.175 05:51:38 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.175 05:51:38 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:48.112 05:51:39 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:48.112 05:51:39 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:48.112 05:51:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:48.112 05:51:39 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:48.112 05:51:39 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.112 05:51:39 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:48.680 05:51:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:48.680 05:51:40 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:48.680 05:51:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:49.248 Hugepages 00:04:49.248 node hugesize free / total 00:04:49.248 05:51:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:49.248 05:51:40 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:49.248 05:51:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:49.248 00:04:49.248 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:49.248 05:51:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:49.248 05:51:40 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:49.248 05:51:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:49.248 05:51:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:49.248 05:51:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:49.248 05:51:40 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:49.248 05:51:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:49.249 05:51:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:49.249 05:51:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:49.249 05:51:40 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:49.249 05:51:40 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:49.249 05:51:40 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:49.249 05:51:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:49.508 05:51:41 setup.sh.acl -- 
setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:49.508 05:51:41 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:49.508 05:51:41 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.508 05:51:41 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.508 05:51:41 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:49.508 ************************************ 00:04:49.508 START TEST denied 00:04:49.508 ************************************ 00:04:49.508 05:51:41 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:49.508 05:51:41 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:49.508 05:51:41 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:49.508 05:51:41 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:49.508 05:51:41 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.508 05:51:41 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:50.886 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:50.886 05:51:42 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:50.886 05:51:42 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:50.886 05:51:42 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:50.886 05:51:42 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:50.886 05:51:42 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:50.886 05:51:42 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:50.886 05:51:42 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:50.886 05:51:42 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:50.886 05:51:42 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.886 05:51:42 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.448 00:04:57.448 real 0m7.132s 00:04:57.448 user 0m0.828s 00:04:57.448 sys 0m1.319s 00:04:57.448 05:51:48 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.448 05:51:48 setup.sh.acl.denied -- 
common/autotest_common.sh@10 -- # set +x 00:04:57.448 ************************************ 00:04:57.448 END TEST denied 00:04:57.448 ************************************ 00:04:57.448 05:51:48 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:57.448 05:51:48 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:57.448 05:51:48 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.448 05:51:48 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.448 05:51:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:57.448 ************************************ 00:04:57.448 START TEST allowed 00:04:57.448 ************************************ 00:04:57.448 05:51:48 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:57.448 05:51:48 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:57.448 05:51:48 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:57.448 05:51:48 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:57.448 05:51:48 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.448 05:51:48 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:58.014 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:58.014 05:51:49 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:58.953 00:04:58.953 real 0m2.142s 00:04:58.953 user 0m0.985s 00:04:58.953 sys 0m1.153s 00:04:58.953 05:51:50 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:58.953 05:51:50 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:58.953 ************************************ 00:04:58.953 END TEST allowed 00:04:58.953 ************************************ 00:04:58.953 05:51:50 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:58.953 ************************************ 00:04:58.953 END TEST acl 00:04:58.953 ************************************ 00:04:58.953 00:04:58.953 real 0m11.919s 00:04:58.953 user 0m3.069s 00:04:58.953 sys 0m3.856s 00:04:58.953 05:51:50 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.953 05:51:50 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:58.953 05:51:50 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:58.953 05:51:50 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:58.953 05:51:50 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.953 05:51:50 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.953 05:51:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:58.953 ************************************ 00:04:58.953 START TEST hugepages 00:04:58.953 ************************************ 00:04:58.953 05:51:50 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:59.214 * Looking for test storage... 00:04:59.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:59.214 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:59.214 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:59.214 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:59.214 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 4679724 kB' 'MemAvailable: 7369340 kB' 'Buffers: 2436 kB' 'Cached: 2893752 kB' 'SwapCached: 0 kB' 'Active: 444472 kB' 'Inactive: 2553636 kB' 'Active(anon): 112432 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 103860 kB' 'Mapped: 49084 kB' 'Shmem: 10512 kB' 'KReclaimable: 82208 kB' 'Slab: 160952 kB' 'SReclaimable: 82208 kB' 'SUnreclaim: 78744 kB' 'KernelStack: 6448 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412432 kB' 'Committed_AS: 326836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.264 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:59.265 05:51:50 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:59.265 05:51:50 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:59.265 05:51:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.265 05:51:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.265 05:51:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:59.265 ************************************ 00:04:59.265 START TEST default_setup 00:04:59.265 ************************************ 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.265 05:51:50 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:59.851 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:00.121 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:00.121 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:05:00.121 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:00.394 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:00.394 05:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:00.394 05:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:00.394 05:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:00.394 05:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:00.394 05:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:00.394 05:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:00.394 05:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:00.394 05:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:00.394 05:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:00.394 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:00.394 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6800760 kB' 'MemAvailable: 9490144 kB' 'Buffers: 2436 kB' 'Cached: 2893748 kB' 'SwapCached: 0 kB' 'Active: 463232 kB' 'Inactive: 2553664 kB' 'Active(anon): 131192 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 122100 kB' 'Mapped: 48620 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160200 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78512 kB' 'KernelStack: 6480 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 348988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 
9437184 kB' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.395 05:51:51 setup.sh.hugepages.default_setup 
00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.395 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:00.396 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6800508 kB' 'MemAvailable: 9489892 kB' 'Buffers: 2436 kB' 'Cached: 2893748 kB' 'SwapCached: 0 kB' 'Active: 462828 kB' 'Inactive: 2553664 kB' 'Active(anon): 130788 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121932 kB' 'Mapped: 48680 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160208 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78520 kB' 'KernelStack: 6432 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 348988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:00.396 05:51:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... xtrace of the per-key scan elided: IFS=': ' / read -r var val _ / [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue, repeated for every field from MemTotal through HugePages_Rsvd ...]
00:05:00.397 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:00.397 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:00.397 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:00.397 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:05:00.397 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:00.397 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[... same local node= / local var val / mem_f=/proc/meminfo setup trace as above ...]
00:05:00.397 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.397 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.397 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:00.397 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:00.397 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6800508 kB' 'MemAvailable: 9489892 kB' 'Buffers: 2436 kB' 'Cached: 2893748 kB' 'SwapCached: 0 kB' 'Active: 462400 kB' 'Inactive: 2553664 kB' 'Active(anon): 130360 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121496 kB' 'Mapped: 48560 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160208 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78520 kB' 'KernelStack: 6448 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 348988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[... xtrace of the per-key scan elided: IFS=': ' / read -r var val _ / [[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue, repeated for every field from MemTotal through HugePages_Free ...]
00:05:00.398 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:00.398 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:00.398 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
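Worth noting before the results: the printf dumps in this stretch are captures of the same machine taken fractions of a second apart, and only process-churn fields drift (AnonPages 121932 -> 121496 kB, Mapped 48680 -> 48560 kB, PageTables 4112 -> 4136 kB) while every HugePages_* counter holds steady, which is what makes the accounting checks below meaningful. A small sketch for spotting such drift between two captures (the meminfo.before/meminfo.after file names are hypothetical; capture with e.g. cat /proc/meminfo > meminfo.before):

    # Print every field whose value changed between the two captures.
    awk 'NR == FNR { before[$1] = $2; next }
         $1 in before && before[$1] != $2 {
             printf "%-20s %s -> %s\n", $1, before[$1], $2
         }' meminfo.before meminfo.after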
00:05:00.398 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0
00:05:00.398 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:05:00.398 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:00.398 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:00.398 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:00.398 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:00.398 anon_hugepages=0
00:05:00.398 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:00.398 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:00.398 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:00.398 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
[... same local node= / local var val / mem_f=/proc/meminfo setup trace as above ...]
00:05:00.398 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.398 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.398 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:00.398 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:00.398 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6800816 kB' 'MemAvailable: 9490200 kB' 'Buffers: 2436 kB' 'Cached: 2893748 kB' 'SwapCached: 0 kB' 'Active: 462572 kB' 'Inactive: 2553664 kB' 'Active(anon): 130532 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121720 kB' 'Mapped: 48820 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160216 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78528 kB' 'KernelStack: 6480 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 348620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[... xtrace of the per-key scan elided: IFS=': ' / read -r var val _ / [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue, repeated for each /proc/meminfo field ...]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.399 05:51:52 
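The wall of continue lines above is ordinary bash xtrace, not an error: setup/common.sh looks a key up in /proc/meminfo by reading one "Key: value" pair per iteration and skipping every non-matching key, and `set -x` prints the right-hand side of the `[[ ... == ... ]]` test with each character backslash-escaped (hence \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l) to show it is matched literally. A minimal sketch of that lookup pattern, under a hypothetical helper name (this is not the literal setup/common.sh source):

    # get_meminfo_value NAME [FILE] -- hypothetical helper mirroring the
    # traced pattern: split each "Key: value" line on ': ', skip keys with
    # continue until NAME matches, then print the value and return 0.
    get_meminfo_value() {
        local get=$1 mem_f=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done <"$mem_f"
        return 1
    }

Here the lookup yields 1024, which setup/hugepages.sh@110 immediately checks against nr_hugepages + surp + resv: the configured page count plus surplus and reserved pages must account for every allocated hugepage.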
00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:00.399 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6800572 kB' 'MemUsed: 5441388 kB' 'SwapCached: 0 kB' 'Active: 462852 kB' 'Inactive: 2553668 kB' 'Active(anon): 130812 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 2896184 kB' 'Mapped: 48560 kB' 'AnonPages: 121720 kB' 'Shmem: 10476 kB' 'KernelStack: 6480 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81688 kB' 'Slab: 160200 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[… repeated xtrace trimmed: the same read/continue scan walks node0's keys (MemTotal … HugePages_Free) until HugePages_Surp is reached …]
00:05:00.400 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
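For the per-node pass, get_meminfo is called with a node argument and swaps /proc/meminfo for /sys/devices/system/node/node0/meminfo, whose lines the kernel prefixes with "Node 0 "; the mem=("${mem[@]#Node +([0-9]) }") expansion in the trace strips that prefix with an extglob pattern so the file parses like plain meminfo. A standalone sketch of that step (assumes a NUMA node0 exists and extglob is enabled):

    # Read node-local meminfo and drop the "Node <N> " prefix from every
    # line, as the traced parameter expansion does.
    shopt -s extglob
    mapfile -t mem </sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    printf '%s\n' "${mem[@]}"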
00:05:00.400 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:00.400 05:51:52 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:00.400 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:00.400 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:00.400 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:00.400 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:00.400 node0=1024 expecting 1024
05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:00.400 05:51:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:00.400
00:05:00.400 real 0m1.325s
00:05:00.400 user 0m0.612s
00:05:00.400 sys 0m0.693s
00:05:00.400 05:51:52 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:00.400 05:51:52 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:05:00.400 ************************************
00:05:00.400 END TEST default_setup
00:05:00.400 ************************************
00:05:00.658 05:51:52 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:00.658 05:51:52 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:00.658 05:51:52 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:00.658 05:51:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:00.658 05:51:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:00.658 ************************************
00:05:00.658 START TEST per_node_1G_alloc
00:05:00.658 ************************************
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
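get_test_nr_hugepages 1048576 0 turns a size request into a page count: with this machine's Hugepagesize of 2048 kB (visible in the meminfo dumps below), the 1048576 kB (1 GiB) request becomes 1048576 / 2048 = 512 pages, pinned to node 0. A sketch of that arithmetic, with hypothetical variable names and the size assumed to be in kB as the trace implies:

    # Derive the hugepage count the trace arrives at (nr_hugepages=512).
    size_kb=1048576                                                 # 1 GiB in kB
    hugepgsz_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 here
    ((size_kb >= hugepgsz_kb)) && nr_hugepages=$((size_kb / hugepgsz_kb))
    echo "$nr_hugepages"                                            # -> 512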
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:00.658 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:00.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:00.916 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:00.916 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:00.916 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:00.916 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
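Before counting hugepages, verify_nr_hugepages appears to rule out transparent-hugepage interference: the kernel brackets the active THP mode in /sys/kernel/mm/transparent_hugepage/enabled, and the traced test [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] succeeds because the active mode here is [madvise] rather than [never], so AnonHugePages is worth reading. A sketch of the same check (the sysfs path is the standard one; the message text is illustrative):

    # The active THP mode is the bracketed word, e.g. "always [madvise] never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        echo "THP not disabled ($thp); checking AnonHugePages"
    fi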
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.180 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7848764 kB' 'MemAvailable: 10538152 kB' 'Buffers: 2436 kB' 'Cached: 2893748 kB' 'SwapCached: 0 kB' 'Active: 463000 kB' 'Inactive: 2553668 kB' 'Active(anon): 130960 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122092 kB' 'Mapped: 48572 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160144 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78456 kB' 'KernelStack: 6456 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 348988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.181 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7848516 kB' 'MemAvailable: 10537904 kB' 'Buffers: 2436 kB' 'Cached: 2893748 kB' 'SwapCached: 0 kB' 'Active: 462500 kB' 'Inactive: 2553668 kB' 'Active(anon): 130460 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 121576 kB' 'Mapped: 48560 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160172 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78484 kB' 'KernelStack: 6464 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 348988 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:01.182 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue trace repeated for each remaining /proc/meminfo field ...]
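A note on the \H\u\g\e\P\a\g\e\s\_\S\u\r\p rendering in the trace above: inside [[ ... == ... ]] an unquoted right-hand side is a glob pattern while a quoted one is matched literally, and bash's xtrace prints the literal form with every character backslash-escaped. A small standalone illustration; the variable and values here are examples only, not part of setup/common.sh:

# Quoted vs unquoted right-hand side in [[ ]]; example only.
var=HugePages_Surp
[[ $var == HugePages_* ]] && echo "glob match"          # unquoted: pattern match
[[ $var == "HugePages_Surp" ]] && echo "literal match"  # quoted: literal match
# Under 'set -x' the second test is traced as:
#   [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]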
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
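For readers following the trace: each get_meminfo call above is a plain field scan over /proc/meminfo, reading one "Field: value" pair per line and echoing the value once the requested field is hit. A minimal standalone sketch of the same idea, assuming only a readable /proc/meminfo; get_field is an illustrative name, not the actual setup/common.sh helper:

#!/usr/bin/env bash
# Minimal sketch of the field scan traced above: split each /proc/meminfo
# line on ': ' and print the value of the requested field.
get_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # var is the field name, val the number; a trailing "kB" lands in $_.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1 # field not present
}

surp=$(get_field HugePages_Surp) # 0 on this runner, per the dump above
echo "surplus hugepages: $surp"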
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7849272 kB' 'MemAvailable: 10538660 kB' 'Buffers: 2436 kB' 'Cached: 2893748 kB' 'SwapCached: 0 kB' 'Active: 462836 kB' 'Inactive: 2553668 kB' 'Active(anon): 130796 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 121888 kB' 'Mapped: 48560 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160172 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78484 kB' 'KernelStack: 6464 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB'
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 348988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:01.184 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue trace repeated for each remaining /proc/meminfo field ...]
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
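The mem=("${mem[@]#Node +([0-9]) }") step traced earlier exists because per-node meminfo files prefix every line with "Node <N> "; stripping that prefix lets the same scan handle /proc/meminfo and /sys/devices/system/node/node<N>/meminfo alike. A hedged sketch of just that normalization; the sample array is fabricated for illustration:

# Strip the "Node <N> " prefix from per-node meminfo lines; the
# +([0-9]) pattern needs the extglob shell option.
shopt -s extglob
mem=("Node 0 HugePages_Total: 512" "Node 0 HugePages_Free: 512")
mem=("${mem[@]#Node +([0-9]) }") # remove shortest prefix "Node <digits> "
printf '%s\n' "${mem[@]}"
# -> HugePages_Total: 512
#    HugePages_Free: 512

In the calls above node is empty, so the [[ -e /sys/devices/system/node/node/meminfo ]] test fails, the scan falls back to /proc/meminfo, and the strip is a no-op.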
00:05:01.186 nr_hugepages=512
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:01.186 resv_hugepages=0
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:01.186 surplus_hugepages=0
00:05:01.186 anon_hugepages=0
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
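The two arithmetic tests just traced assert that the 512 requested hugepages are fully accounted for: the allocated total plus surplus plus reserved pages must equal the request. The same consistency check can be reproduced standalone; the awk extraction below is illustrative, not how hugepages.sh obtains the values:

# Re-run the (( 512 == nr_hugepages + surp + resv )) check against live /proc/meminfo.
want=512
nr=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
if (( want == nr + surp + resv )); then
    echo "hugepage accounting consistent: total=$nr surp=$surp resv=$resv"
else
    echo "hugepage accounting mismatch: total=$nr surp=$surp resv=$resv (wanted $want)" >&2
fi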
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7849020 kB' 'MemAvailable: 10538408 kB' 'Buffers: 2436 kB' 'Cached: 2893748 kB' 'SwapCached: 0 kB' 'Active: 462696 kB' 'Inactive: 2553668 kB' 'Active(anon): 130656 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 121756 kB' 'Mapped: 48560 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160172 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78484 kB' 'KernelStack: 6448 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 348988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
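The snapshot above is internally consistent: 512 hugepages at the 2048 kB Hugepagesize account for exactly the reported 'Hugetlb: 1048576 kB'. A quick arithmetic check of that relation:

# HugePages_Total x Hugepagesize should equal Hugetlb (values from the dump above).
pages=512    # HugePages_Total
page_kb=2048 # Hugepagesize in kB
echo "$((pages * page_kb)) kB" # prints "1048576 kB", matching Hugetlb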
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:01.186 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue trace repeated field by field ...]
00:05:01.187 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.187 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.187
05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.187 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.187 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.187 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:01.187 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.187 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.187 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.187 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:01.187 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.187 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:01.187 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:01.187 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:01.187 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.187 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7848772 kB' 'MemUsed: 4393188 kB' 'SwapCached: 0 kB' 'Active: 462784 kB' 'Inactive: 2553668 kB' 'Active(anon): 130744 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 2896184 kB' 'Mapped: 
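The HugePages_Total lookup that just returned 512 shows the whole technique setup/common.sh relies on: dump the relevant meminfo, strip any "Node N " prefix, then scan row by row with IFS=': ' read until the requested key matches. Below is a self-contained sketch reconstructed from the commands visible in this trace; it is an approximation of the helper, not a quote of setup/common.sh:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo flow seen in the trace above (assumed, not quoted).
    shopt -s extglob   # the +([0-9]) pattern below needs extglob

    get_meminfo() {    # get_meminfo <key> [numa-node]
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # per-node lookups switch to the node-local file (common.sh@23-24 above)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # node files prefix every row with "Node N "; drop it (common.sh@29 above)
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip rows until the key matches
            echo "$val"                        # e.g. 512 for HugePages_Total here
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp 0   # the node0 lookup traced next returns 0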
00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.188 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7848772 kB' 'MemUsed: 4393188 kB' 'SwapCached: 0 kB' 'Active: 462784 kB' 'Inactive: 2553668 kB' 'Active(anon): 130744 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 2896184 kB' 'Mapped: 48560 kB' 'AnonPages: 121852 kB' 'Shmem: 10476 kB' 'KernelStack: 6448 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81688 kB' 'Slab: 160164 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@31-32 read/continue scan over the node0 rows above: MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free; none matches HugePages_Surp]
00:05:01.189 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:01.189 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:01.189 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:01.189 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:01.189 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:01.189 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:01.189 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:01.189 node0=512 expecting 512
00:05:01.189 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:01.189 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:01.189 real 0m0.676s
00:05:01.189 user 0m0.312s
00:05:01.189 sys 0m0.412s
00:05:01.189 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:01.189 05:51:52 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:01.189 ************************************
00:05:01.189 END TEST per_node_1G_alloc
00:05:01.189 ************************************
00:05:01.189 05:51:52 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
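A note on the backslash-riddled comparisons throughout this trace, e.g. [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]: that is ordinary xtrace output, not corruption. When bash traces a [[ $var == "$get" ]] test whose right-hand side is quoted, it prints the operand with every character escaped to show it is compared literally rather than as a glob pattern. A two-line repro, assumed equivalent to what setup/common.sh@32 executes:

    set -x
    get=HugePages_Total var=SwapTotal
    [[ $var == "$get" ]] || echo "no match"   # traces as: [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
    set +x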
00:05:01.189 05:51:52 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
05:51:52 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
05:51:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
05:51:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:01.189 ************************************
00:05:01.189 START TEST even_2G_alloc
00:05:01.189 ************************************
05:51:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
05:51:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
05:51:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
05:51:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
05:51:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
05:51:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
05:51:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[setup/hugepages.sh@62-@84 initializes user_nodes/nodes_test for the single node and leaves nodes_test[0]=1024]
05:51:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
05:51:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
05:51:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
05:51:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
05:51:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:01.760 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:01.760 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:01.760 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:01.760 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:01.760 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
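Before verification starts, the hugepage arithmetic is worth spelling out: get_test_nr_hugepages received size=2097152 (kB, i.e. 2 GiB) and set nr_hugepages=1024, which matches dividing by the 2048 kB Hugepagesize this VM reports. The division below is inferred from those numbers, not quoted from hugepages.sh:

    #!/usr/bin/env bash
    # Reproduce the count the trace arrived at: 2 GiB worth of 2 MiB pages.
    size_kb=2097152                                                    # requested size in kB
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 on this VM
    echo $((size_kb / hugepagesize_kb))                                # prints 1024 == nr_hugepages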
00:05:01.760 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6797384 kB' 'MemAvailable: 9486772 kB' 'Buffers: 2436 kB' 'Cached: 2893748 kB' 'SwapCached: 0 kB' 'Active: 463304 kB' 'Inactive: 2553668 kB' 'Active(anon): 131264 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 122612 kB' 'Mapped: 48636 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160168 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78480 kB' 'KernelStack: 6440 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[setup/common.sh@31-32 read/continue scan over the rows above: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted; none matches AnonHugePages]
00:05:01.762 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:01.762 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:01.762 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:01.762 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
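With anon sampled, verify_nr_hugepages has all the inputs for the identity checked at hugepages.sh@110 earlier in the log: HugePages_Total must equal nr_hugepages plus surplus plus reserved pages. A sketch of that tally, reusing the get_meminfo sketch above; the exact control flow is inferred from the trace, not quoted:

    # THP gate seen at hugepages.sh@96: only sample AnonHugePages when
    # transparent_hugepage is not pinned to "[never]".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
    anon=0
    [[ $thp != *"[never]"* ]] && anon=$(get_meminfo AnonHugePages)  # 0 kB in this run
    surp=$(get_meminfo HugePages_Surp)   # 0, as the lookup below returns
    resv=$(get_meminfo HugePages_Rsvd)   # 0 per the snapshot above (assumed source of resv)
    nr_hugepages=1024                    # set by get_test_nr_hugepages earlier
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) \
        && echo "hugepage totals verified"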
00:05:01.762 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:01.762 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:01.762 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:01.762 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:01.762 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:01.762 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.762 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.762 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.762 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.762 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.762 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:01.762 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:01.762 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6797384 kB' 'MemAvailable: 9486772 kB' 'Buffers: 2436 kB' 'Cached: 2893748 kB' 'SwapCached: 0 kB' 'Active: 462660 kB' 'Inactive: 2553668 kB' 'Active(anon): 130620 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 121716 kB' 'Mapped: 48692 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160176 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78488 kB' 'KernelStack: 6440 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[setup/common.sh@31-32 read/continue scan over the rows above: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables; none matches HugePages_Surp]
00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.763 05:51:53
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.763 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6797944 kB' 'MemAvailable: 9487332 kB' 'Buffers: 2436 kB' 'Cached: 2893748 kB' 'SwapCached: 0 kB' 'Active: 462676 kB' 'Inactive: 2553668 kB' 'Active(anon): 130636 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 121736 kB' 'Mapped: 
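The xtrace above is setup/common.sh's get_meminfo helper: it snapshots the memory counters (the long printf line), then walks them one "Key: value" pair at a time, hitting "continue" for every field until the requested key (HugePages_Surp here) matches, echoes its value, and returns; hugepages.sh@99 stores the result as surp=0. A minimal sketch of that pattern, assuming bash with extglob (the traced helper additionally mapfiles the whole snapshot before scanning):

  #!/usr/bin/env bash
  shopt -s extglob

  get_meminfo() {
      local get=$1 node=${2:-}   # key to look up, optional NUMA node
      local line var val _
      local mem_f=/proc/meminfo
      # with a node argument, read the per-node sysfs copy instead
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS= read -r line; do
          line=${line#Node +([0-9]) }       # per-node lines carry a "Node N " prefix
          IFS=': ' read -r var val _ <<<"$line"
          [[ $var == "$get" ]] || continue  # the long runs of "continue" in the trace
          echo "$val"                       # kB for sizes, bare counts for HugePages_*
          return 0
      done <"$mem_f"
      return 1
  }

  get_meminfo HugePages_Surp   # prints 0 on this box, matching the trace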
00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6797944 kB' 'MemAvailable: 9487332 kB' 'Buffers: 2436 kB' 'Cached: 2893748 kB' 'SwapCached: 0 kB' 'Active: 462676 kB' 'Inactive: 2553668 kB' 'Active(anon): 130636 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553668 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 121736 kB' 'Mapped: 48560 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160192 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78504 kB' 'KernelStack: 6448 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:01.764 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]/continue iterations for the remaining meminfo fields elided ...]
00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=1024
00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
resv_hugepages=0
00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
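With surp=0 and resv=0 read back, the echoed nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages values feed the consistency checks above: even_2G_alloc asserts that the 1024 configured 2048 kB pages (2 GiB, matching 'Hugetlb: 2097152 kB' in the snapshots) are all present with none surplus or reserved. In sketch form, with the values from this run:

  # values read back from /proc/meminfo by the get_meminfo calls
  nr_hugepages=1024   # HugePages_Total
  surp=0              # HugePages_Surp
  resv=0              # HugePages_Rsvd
  # the assertions traced from setup/hugepages.sh (@107/@109 above, @110 below)
  (( 1024 == nr_hugepages + surp + resv ))   # every requested page accounted for
  (( 1024 == nr_hugepages ))                 # and none of them surplus or reserved
  echo "$((nr_hugepages * 2048)) kB"         # -> 2097152 kB, the even 2G of the test name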
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.766 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:01.767 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.767 05:51:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [... repeated xtrace iterations elided: the IFS=': ' read/continue loop skips each /proc/meminfo field from SReclaimable through Unaccepted until the requested key matches ...] 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
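The loop condensed above is the field lookup at the heart of setup/common.sh's get_meminfo. A minimal standalone sketch of the same technique (a simplification that reads the file directly, where the traced script replays a mapfile'd copy through printf; the function name here is ours, not SPDK's):

    # Return the value of one meminfo field, e.g. HugePages_Total -> 1024.
    get_meminfo_sketch() {
        local get=$1 mem_f=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip fields until the key matches
            echo "$val"
            return 0
        done < "$mem_f"
        return 1   # field not present
    }
    get_meminfo_sketch HugePages_Total   # prints 1024 on this test VM

The backslash-escaped right-hand sides in the trace (e.g. [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]) are only xtrace's way of printing a literal, non-glob string comparison.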
00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6798804 kB' 'MemUsed: 5443156 kB' 'SwapCached: 0 kB' 'Active: 462832 kB' 'Inactive: 2553664 kB' 'Active(anon): 130792 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553664 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'FilePages: 2896180 kB' 'Mapped: 48760 kB' 'AnonPages: 122032 kB' 'Shmem: 10476 kB' 'KernelStack: 6532 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81688 kB' 'Slab: 160184 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:02.034 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [... repeated xtrace iterations elided: the read/continue loop skips each node0 field from MemTotal through HugePages_Free until HugePages_Surp matches ...] 00:05:02.035 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.035 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.035 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
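The node-scoped call traced above, get_meminfo HugePages_Surp 0, is the same scan pointed at the per-NUMA-node copy of meminfo; those files prefix every line with 'Node 0 ', which the script strips with an extglob substitution before parsing. A sketch of that selection step, built from the expressions visible in the trace (extglob must be enabled for the +([0-9]) pattern to work):

    shopt -s extglob
    node=0 mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # 'Node 0 HugePages_Surp: 0' -> 'HugePages_Surp: 0'
    printf '%s\n' "${mem[@]}" | grep '^HugePages_Surp:'

After the prefix strip, the array lines are in the same 'Key: value' shape as /proc/meminfo, so the read loop sketched earlier can consume them unchanged.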
00:05:02.035 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.035 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.035 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.035 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.035 node0=1024 expecting 1024 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:02.035 05:51:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:02.035 00:05:02.035 real 0m0.657s 00:05:02.035 user 0m0.328s 00:05:02.035 sys 0m0.372s 00:05:02.035 05:51:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.035 ************************************ 00:05:02.035 END TEST even_2G_alloc 00:05:02.035 ************************************ 00:05:02.035 05:51:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:02.035 05:51:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:02.035 05:51:53 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:02.035 05:51:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.035 05:51:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.035 05:51:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:02.035 ************************************ 00:05:02.035 START TEST odd_alloc 00:05:02.035 ************************************ 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
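The odd_alloc parameters just traced decode as: a request of size=2098176 kB (matching the HUGEMEM=2049, presumably MB, exported just below) at the default 2048 kB hugepage size is 1024.5 pages, which becomes the deliberately odd nr_hugepages=1025. A worked check; nearest-page rounding reproduces the traced value, though the exact rounding expression in setup/hugepages.sh is not visible in this capture:

    size_kb=$((2049 * 1024))                       # 2098176 kB requested
    page_kb=2048                                   # default 2 MiB hugepage
    echo $(( (size_kb + page_kb / 2) / page_kb ))  # -> 1025

The meminfo dump further down agrees: 'HugePages_Total: 1025' and 'Hugetlb: 2099200 kB' (1025 x 2048 kB).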
00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.035 05:51:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.292 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.555 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:02.555 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:02.555 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:02.555 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.555 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.556 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6794480 kB' 'MemAvailable: 9483868 kB' 'Buffers: 2436 kB' 'Cached: 2893748 kB' 'SwapCached: 0 kB' 'Active: 463208 kB' 'Inactive: 2553668 kB' 'Active(anon): 131168 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553668 kB' 'Unevictable: 1536 kB'
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 128 kB' 'Writeback: 0 kB' 'AnonPages: 122264 kB' 'Mapped: 49252 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160148 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78460 kB' 'KernelStack: 6504 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 348748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:05:02.556 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [... repeated xtrace iterations elided: the read/continue loop skips each /proc/meminfo field from MemTotal through HardwareCorrupted until AnonHugePages matches ...] 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.557 05:51:54
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6794480 kB' 'MemAvailable: 9483872 kB' 'Buffers: 2436 kB' 'Cached: 2893752 kB' 'SwapCached: 0 kB' 'Active: 463392 kB' 'Inactive: 2553672 kB' 'Active(anon): 131352 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 128 kB' 'Writeback: 0 kB' 'AnonPages: 122224 kB' 'Mapped: 49192 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160148 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78460 kB' 'KernelStack: 6504 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 349116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.557 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.557 05:51:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [... repeated xtrace iterations elided: the read/continue loop advances from Cached through HardwareCorrupted toward HugePages_Surp; the capture ends mid-scan ...] 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc
-- setup/common.sh@31 -- # read -r var val _ 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.558 
05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.558 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6794228 kB' 'MemAvailable: 9483620 kB' 'Buffers: 2436 kB' 'Cached: 2893752 kB' 'SwapCached: 0 kB' 'Active: 462924 kB' 'Inactive: 2553672 kB' 'Active(anon): 130884 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 128 kB' 'Writeback: 0 kB' 'AnonPages: 121988 kB' 'Mapped: 48704 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160176 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78488 kB' 'KernelStack: 6468 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 349116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
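(Editorial note on the trace above and below: the records between setup/common.sh@28 and @33 are get_meminfo walking the /proc/meminfo snapshot printed at @16, one IFS=': ' / read -r var val _ / continue triple per non-matching field. The snapshot itself is internally consistent: HugePages_Total: 1025 at Hugepagesize: 2048 kB gives exactly the Hugetlb: 2099200 kB shown, since 1025 * 2048 = 2099200. A minimal sketch of what the helper plausibly looks like, reconstructed only from the @17-@33 markers in this trace; names and exact source text are inferred, not copied from setup/common.sh.)

# Sketch reconstructed from the xtrace above; assumes extglob for the
# "Node N " prefix strip seen at setup/common.sh@29.
shopt -s extglob
get_meminfo() {
    local get=$1          # field to look up, e.g. HugePages_Rsvd
    local node=${2:-}     # optional NUMA node number ("" = system-wide)
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # Per-node lookups read that node's own meminfo instead (@23/@24);
    # with node empty, node/node/meminfo does not exist, as traced above.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # node files prefix lines with "Node N "
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long runs of records above
        echo "$val"                        # @33: value only; units land in _
        return 0
    done < <(printf '%s\n' "${mem[@]}")   # @16: the snapshot printed above
}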
00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.559 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 
05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:02.560 nr_hugepages=1025 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:02.560 resv_hugepages=0 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:02.560 surplus_hugepages=0 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:02.560 anon_hugepages=0 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 
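(With both lookups returning 0, the accounting closes: the trace at hugepages.sh@99-@109 records surp=0 and resv=0, echoes the four totals, and asserts that the odd page count requested — 1025, odd on purpose, which is what odd_alloc exercises — equals nr_hugepages + surp + resv. A hedged sketch of that check, assuming get_meminfo as sketched earlier and nr_hugepages=1025 set earlier in the test, outside this excerpt; the anon variable name is likewise an assumption.)

# Hedged sketch of hugepages.sh@99-@109 as traced above.
surp=$(get_meminfo HugePages_Surp)   # -> 0
resv=$(get_meminfo HugePages_Rsvd)   # -> 0
echo "nr_hugepages=$nr_hugepages"    # -> nr_hugepages=1025
echo "resv_hugepages=$resv"          # -> resv_hugepages=0
echo "surplus_hugepages=$surp"       # -> surplus_hugepages=0
echo "anon_hugepages=$anon"          # from AnonHugePages; 0 here (assumed name)
(( 1025 == nr_hugepages + surp + resv ))   # @107: the odd request is fully met
(( 1025 == nr_hugepages ))                 # @109: and with no surplus/reserve slack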
00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.560 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6794228 kB' 'MemAvailable: 9483620 kB' 'Buffers: 2436 kB' 'Cached: 2893752 kB' 'SwapCached: 0 kB' 'Active: 462672 kB' 'Inactive: 2553672 kB' 'Active(anon): 130632 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 128 kB' 'Writeback: 0 kB' 'AnonPages: 121740 kB' 'Mapped: 48704 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160168 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78480 kB' 'KernelStack: 6468 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 349116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.561 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
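(A reading note on the comparisons themselves: the backslash-riddled right-hand sides such as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l are not corruption. Bash's xtrace escapes every character of a quoted operand inside [[ ]] to signal that it is matched literally rather than as a glob pattern, which is exactly what a two-line reproduction shows.)

set -x
get=HugePages_Total
[[ MemTotal == "$get" ]]
# xtrace prints: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]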
00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6794228 kB' 'MemUsed: 5447732 kB' 'SwapCached: 0 kB' 'Active: 462864 kB' 'Inactive: 2553672 kB' 'Active(anon): 130824 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 128 kB' 'Writeback: 0 kB' 'FilePages: 2896188 kB' 'Mapped: 48704 kB' 'AnonPages: 121672 kB' 'Shmem: 10476 kB' 'KernelStack: 6452 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81688 kB' 'Slab: 160168 kB' 
'SReclaimable: 81688 kB' 'SUnreclaim: 78480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.562 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.563 node0=1025 expecting 1025 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:02.563 00:05:02.563 real 0m0.675s 00:05:02.563 user 0m0.312s 00:05:02.563 sys 0m0.410s 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.563 05:51:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:02.563 ************************************ 00:05:02.563 END TEST odd_alloc 00:05:02.563 ************************************ 00:05:02.822 05:51:54 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:02.822 05:51:54 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:02.823 05:51:54 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.823 05:51:54 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.823 05:51:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:02.823 ************************************ 00:05:02.823 START TEST custom_alloc 00:05:02.823 ************************************ 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:02.823 
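The arithmetic behind the nr_hugepages=512 just traced: get_test_nr_hugepages received a size of 1048576 kB, and at the 2048 kB default hugepage size reported by the meminfo dumps in this log ('Hugepagesize: 2048 kB') that works out to 512 pages. A minimal sketch of that step (setup/hugepages.sh@55-57 above), assuming kB units throughout, consistent with 1048576 / 2048 = 512; reading Hugepagesize from /proc/meminfo stands in for however the scripts actually derive default_hugepages:

    #!/usr/bin/env bash
    # Sketch of the size -> page-count step, assuming kB units.
    size=1048576                                                         # requested pool, kB
    default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 kB on this VM

    if (( size >= default_hugepages )); then
        nr_hugepages=$(( size / default_hugepages ))
        echo "nr_hugepages=$nr_hugepages"                                # -> nr_hugepages=512
    else
        echo "requested size is smaller than one hugepage" >&2
        exit 1
    fi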
05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:02.823 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.823 05:51:54 
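Immediately before handing control to scripts/setup.sh, the trace shows the per-node counts being packed into HUGENODE (ending in HUGENODE='nodes_hp[0]=512'). A condensed sketch of that packing, reconstructed from the setup/hugepages.sh@181-183 entries above; build_hugenode is a hypothetical wrapper name, and the IFS=',' join only matters on multi-node hosts (this VM has a single node):

    #!/usr/bin/env bash
    # Pack per-node hugepage counts into the HUGENODE string consumed by
    # scripts/setup.sh, mirroring the nodes_hp loop traced above.
    build_hugenode() {
        local IFS=,                         # joins multi-node lists as a,b,c
        local -a nodes_hp=("$@") hugenode=()
        local node
        for node in "${!nodes_hp[@]}"; do
            hugenode+=("nodes_hp[$node]=${nodes_hp[node]}")
        done
        echo "${hugenode[*]}"
    }

    HUGENODE=$(build_hugenode 512)          # -> nodes_hp[0]=512, as in the trace
    echo "$HUGENODE"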
setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:03.082 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.082 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:03.082 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:03.082 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:03.082 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7836500 kB' 'MemAvailable: 10525892 kB' 'Buffers: 2436 kB' 'Cached: 2893752 kB' 'SwapCached: 0 kB' 'Active: 463044 kB' 'Inactive: 2553672 kB' 'Active(anon): 131004 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 122108 kB' 'Mapped: 48608 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160172 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78484 kB' 'KernelStack: 6420 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 349116 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
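The long runs of '[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue' entries here (and throughout this log) are setup/common.sh's get_meminfo scanning every field of a meminfo dump until it reaches the requested one; bash xtrace prints the quoted right-hand side of == with each character backslash-escaped to mark it as a literal string rather than a glob. A condensed, self-contained sketch reconstructed from the trace (the real helper captures the file with mapfile and strips the 'Node N ' prefix carried by the per-node files), plus the accounting that verify_nr_hugepages feeds the results into:

    #!/usr/bin/env bash
    # Condensed sketch of setup/common.sh's get_meminfo: pick the global or
    # per-node meminfo file, then scan "Field: value" pairs until a match.
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo

        # Per-node statistics live under /sys; fall back to the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        # xtrace renders the quoted "$get" as e.g. \H\u\g\e\P\a\g\e\s\_\S\u\r\p
        # because a quoted pattern inside [[ ]] is matched literally.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    # The check these reads feed, per the hugepages.sh@110 entry earlier in
    # this log: total pages must equal requested + surplus + reserved.
    nr_hugepages=512
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) && echo OK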
00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.344 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7836500 kB' 'MemAvailable: 10525892 kB' 'Buffers: 2436 kB' 'Cached: 2893752 kB' 'SwapCached: 0 kB' 'Active: 462848 kB' 'Inactive: 2553672 kB' 'Active(anon): 130808 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121868 kB' 'Mapped: 48608 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160164 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78476 kB' 'KernelStack: 6404 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 349116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 
05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.345 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the @31 IFS=': ' / read -r var val _ / @32 compare-and-continue cycle repeats for each remaining non-matching key: Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd ...]
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7836776 kB' 'MemAvailable: 10526168 kB' 'Buffers: 2436 kB' 'Cached: 2893752 kB' 'SwapCached: 0 kB' 'Active: 462696 kB' 'Inactive: 2553672 kB' 'Active(anon): 130656 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121768 kB' 'Mapped: 48564 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160172 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78484 kB' 'KernelStack: 6432 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 349116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
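The block above is one complete get_meminfo call as rendered by set -x: the @16 printf echoes the captured meminfo snapshot, and the @31/@32 lines that follow are a field-by-field scan for a single key. As a rough standalone sketch of that idiom in plain bash (the function name get_meminfo_sketch and its exact option handling are illustrative, not the verbatim setup/common.sh helper):

    #!/usr/bin/env bash
    shopt -s extglob   # the "Node +([0-9]) " strip below needs extended globs

    # get_meminfo_sketch KEY [NODE] -- print KEY's value from /proc/meminfo, or
    # from a node's own meminfo file when NODE is given. A reconstruction of
    # the idiom shown in the trace, not the verbatim setup/common.sh code.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#Node +([0-9]) }            # per-node lines: drop "Node N "
            IFS=': ' read -r var val _ <<<"$line"  # split "Key:   value kB"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done <"$mem_f"
        return 1
    }

    get_meminfo_sketch HugePages_Rsvd   # should print 0 against the snapshot above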
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:03.346 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the scan walks every key ahead of the target the same way: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free ...]
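Worth decoding once, since it fills thousands of lines of this log: the backslash soup on the right of each comparison is an xtrace artifact, not script source. Inside [[ ... == ... ]] an unquoted right-hand side is a glob pattern, so when the script compares against a quoted literal, set -x prints that literal with every character escaped to keep it literal if re-executed. A small illustration (variable names here are mine):

    key=HugePages_Rsvd
    [[ $key == HugePages_* ]] && echo "glob match"          # unquoted RHS: glob pattern
    [[ $key == "HugePages_Rsvd" ]] && echo "literal match"  # quoted RHS: exact string;
    # under set -x this renders as: [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]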
00:05:03.347 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:03.347 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.347 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:03.347 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:03.347 nr_hugepages=512
00:05:03.347 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:03.347 resv_hugepages=0
00:05:03.347 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:03.348 surplus_hugepages=0
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:03.348 anon_hugepages=0
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7836776 kB' 'MemAvailable: 10526168 kB' 'Buffers: 2436 kB' 'Cached: 2893752 kB' 'SwapCached: 0 kB' 'Active: 462664 kB' 'Inactive: 2553672 kB' 'Active(anon): 130624 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121732 kB' 'Mapped: 48564 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160172 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78484 kB' 'KernelStack: 6416 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 349116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
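Taken together, the @99-@110 lines assert that the hugepage bookkeeping balances: the 512 pages this test requested must equal what the kernel reports, once surplus and reserved pages are folded in. A sketch of that identity, reusing the hypothetical get_meminfo_sketch helper from above (the arithmetic mirrors the (( 512 == nr_hugepages + surp + resv )) check in the trace):

    target=512                                   # pages requested by this test run
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in the trace above
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in the trace above
    nr=$(get_meminfo_sketch HugePages_Total)     # 512 in the trace above
    (( target == nr + surp + resv )) && echo "hugepage accounting balances"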
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:03.348 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the read/compare/continue cycle runs again over the full key list, MemFree through CmaFree and Unaccepted, none of which match \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ...]
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7836776 kB' 'MemUsed: 4405184 kB' 'SwapCached: 0 kB' 'Active: 462720 kB' 'Inactive: 2553672 kB' 'Active(anon): 130680 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 2896188 kB' 'Mapped: 48564 kB' 'AnonPages: 121780 kB' 'Shmem: 10476 kB' 'KernelStack: 6448 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81688 kB' 'Slab: 160172 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.349 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... read/compare/continue over the node0 meminfo keys: MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free ...]
00:05:03.350 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.350 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.350 05:51:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:03.350 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:03.350 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:03.350 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:03.350 05:51:54 setup.sh.hugepages.custom_alloc --
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.350 node0=512 expecting 512 00:05:03.350 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:03.350 05:51:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:03.350 00:05:03.350 real 0m0.675s 00:05:03.350 user 0m0.321s 00:05:03.350 sys 0m0.397s 00:05:03.350 05:51:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.350 05:51:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:03.350 ************************************ 00:05:03.350 END TEST custom_alloc 00:05:03.350 ************************************ 00:05:03.350 05:51:55 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:03.350 05:51:55 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:03.350 05:51:55 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.350 05:51:55 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.350 05:51:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:03.350 ************************************ 00:05:03.350 START TEST no_shrink_alloc 00:05:03.350 ************************************ 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:03.350 05:51:55 
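The get_test_nr_hugepages trace above reduces to simple arithmetic: a requested pool size is divided by the default hugepage size to get nr_hugepages, and that count is assigned to each requested NUMA node. A minimal stand-alone sketch of that logic follows; the function name is hypothetical (the traced original is setup/hugepages.sh), and it assumes the size is given in kB with 2048 kB pages, which is consistent with the numbers in this run (2097152 / 2048 = 1024 = HugePages_Total).

    #!/usr/bin/env bash
    # Hypothetical sketch of the arithmetic seen in the trace above,
    # not the SPDK helper itself. Assumes size in kB and 2048 kB pages.
    get_test_nr_hugepages_sketch() {
        local size=$1; shift                  # requested pool size in kB
        local node_ids=("$@")                 # optional NUMA node ids, e.g. "0"
        local default_hugepages=2048          # kB per hugepage, as on this VM
        (( size >= default_hugepages )) || return 1
        local nr_hugepages=$((size / default_hugepages))
        local -A nodes_test=()
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[$node]=$nr_hugepages   # full allocation pinned per node
        done
        declare -p nodes_test
    }
    get_test_nr_hugepages_sketch 2097152 0    # -> nodes_test=([0]="1024")
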
00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:03.350 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:03.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:03.921 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.921 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.921 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.921 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:03.921 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:03.921 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:03.921 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:03.921 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:03.921 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:03.921 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:03.921 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:03.922 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:03.922 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:03.922 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:03.922 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:03.922 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:03.922 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.922 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.922 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.922 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.922 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.922 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.922 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.922 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.922 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6788744 kB' 'MemAvailable: 9478136 kB' 'Buffers: 2436 kB' 'Cached: 2893752 kB' 'SwapCached: 0 kB' 'Active: 463176 kB' 'Inactive: 2553672 kB' 'Active(anon): 131136 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122492 kB' 'Mapped: 48700 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160172 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78484 kB' 'KernelStack: 6516 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32 read/compare loop repeats for each field from MemTotal through HardwareCorrupted -- none matches AnonHugePages, continue ...]
00:05:03.923 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:03.923 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.923 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
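The loop condensed above is the get_meminfo pattern traced from setup/common.sh: snapshot the meminfo lines, then split each with IFS=': ' into a key and a value, skipping until the key equals the requested field and echoing its value. A self-contained sketch of the same pattern (the helper name here is hypothetical; only the technique mirrors the trace):

    #!/usr/bin/env bash
    # Sketch of the traced get_meminfo pattern; echoes one field's value,
    # optionally from a NUMA node's meminfo instead of the global file.
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node counters live under sysfs when a node id is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip "Node N " prefix on sysfs lines
        local line var val _
        for line in "${mem[@]}"; do
            # IFS=': ' splits "HugePages_Total: 1024" into key and value;
            # the trailing _ soaks up a "kB" unit where one is present.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo_sketch AnonHugePages      # -> 0 on this VM
    get_meminfo_sketch HugePages_Total 0  # node0's pool, if node0 exists
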
00:05:03.923 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:03.923 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:03.923 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:03.923 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:03.923 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:03.923 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.923 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.923 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.923 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.923 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.923 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.923 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.923 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6788884 kB' 'MemAvailable: 9478276 kB' 'Buffers: 2436 kB' 'Cached: 2893752 kB' 'SwapCached: 0 kB' 'Active: 462512 kB' 'Inactive: 2553672 kB' 'Active(anon): 130472 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121872 kB' 'Mapped: 48568 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160180 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78492 kB' 'KernelStack: 6464 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:03.923 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31-32 read/compare loop repeats for each field from MemTotal through HugePages_Rsvd -- none matches HugePages_Surp, continue ...]
00:05:03.925 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:03.925 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.925 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:03.925 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
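With anon and surp both 0, only the reserved-page probe remains. Separately from these /proc/meminfo reads, the per-node expectations behind lines like "node0=512 expecting 512" earlier in the log can be read straight from sysfs, where each NUMA node exports its pool. A sketch of that read (hypothetical helper, not part of the SPDK scripts; the sysfs layout is the standard kernel one):

    #!/usr/bin/env bash
    # Print each node's hugepage count, e.g. "node0=1024", by reading
    # /sys/devices/system/node/nodeN/hugepages/hugepages-<size>kB/nr_hugepages.
    print_node_hugepages_sketch() {
        local size_kb=${1:-2048}   # hugepage size in kB; 2048 on this VM
        local node_dir
        for node_dir in /sys/devices/system/node/node[0-9]*; do
            [[ -d $node_dir ]] || continue
            echo "node${node_dir##*node}=$(< "$node_dir/hugepages/hugepages-${size_kb}kB/nr_hugepages")"
        done
    }
    print_node_hugepages_sketch 2048   # e.g. node0=1024 while no_shrink_alloc runs
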
00:05:03.925 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:03.925 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:03.925 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.925 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.925 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.925 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.925 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.925 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.925 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.925 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6788884 kB' 'MemAvailable: 9478276 kB' 'Buffers: 2436 kB' 'Cached: 2893752 kB' 'SwapCached: 0 kB' 'Active: 462484 kB' 'Inactive: 2553672 kB' 'Active(anon): 130444 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121808 kB' 'Mapped: 48568 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160180 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78492 kB' 'KernelStack: 6448 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:03.925 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan trace collapsed: read -r var val _ walks every key from MemTotal through HugePages_Free; each non-matching key hits continue]
00:05:03.927 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:03.927 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:03.927 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:03.927 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=1024
00:05:03.927 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
resv_hugepages=0
00:05:03.927 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:05:03.927 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:05:03.927 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:03.927 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:03.927 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
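The hugepages.sh lines just traced are the global-pool bookkeeping: the echoed values (nr_hugepages, resv_hugepages, surplus_hugepages, anon_hugepages) feed an arithmetic identity that fails the test if the kernel pool was silently shrunk or over-grown. A hedged sketch of that check — verify_global_pool is an illustrative name, not a function in the script, and it reuses the get_meminfo sketch above:

    verify_global_pool() {
        local nr_hugepages=$1   # what the test configured (1024 in this run)
        local surp resv total
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        total=$(get_meminfo HugePages_Total)
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        # HugePages_Total must account for configured + surplus + reserved
        # pages, matching the (( 1024 == nr_hugepages + surp + resv )) trace.
        (( total == nr_hugepages + surp + resv ))
    }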
00:05:03.927 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:03.927 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:04.187 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # [entry trace collapsed: identical to the HugePages_Rsvd call above (node=, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ')]
00:05:04.188 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6797128 kB' 'MemAvailable: 9486520 kB' 'Buffers: 2436 kB' 'Cached: 2893752 kB' 'SwapCached: 0 kB' 'Active: 459256 kB' 'Inactive: 2553672 kB' 'Active(anon): 127216 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118576 kB' 'Mapped: 47828 kB' 'Shmem: 10476 kB' 'KReclaimable: 81688 kB' 'Slab: 160140 kB' 'SReclaimable: 81688 kB' 'SUnreclaim: 78452 kB' 'KernelStack: 6368 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 336276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:04.188 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan trace collapsed: every key before HugePages_Total hits continue]
00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
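get_nodes, traced just above, discovers the NUMA topology by globbing sysfs; every node found is then checked with the same meminfo scan against its own per-node file. A sketch of that enumeration (extglob is what makes the node+([0-9]) pattern work; filling nodes_sys via get_meminfo is a simplification of the traced direct assignment):

    shopt -s extglob
    declare -a nodes_sys   # per-node hugepage counts, keyed by node index
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # "${node##*node}" strips the path down to the bare node index
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        # A single-socket VM, as in this run, yields exactly one node
        (( ${#nodes_sys[@]} > 0 ))
    }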
/sys/devices/system/node/node0/meminfo ]] 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6797304 kB' 'MemUsed: 5444656 kB' 'SwapCached: 0 kB' 'Active: 459284 kB' 'Inactive: 2553672 kB' 'Active(anon): 127244 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 2896188 kB' 'Mapped: 47828 kB' 'AnonPages: 118572 kB' 'Shmem: 10476 kB' 'KernelStack: 6368 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81676 kB' 'Slab: 160044 kB' 'SReclaimable: 81676 kB' 'SUnreclaim: 78368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.189 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.190 05:51:55 
00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace collapsed: IFS=': ' read -r var val _ scan; fields FilePages through HugePages_Free skipped via continue]
00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.190 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.191 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:04.191 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:04.191 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:04.191 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:04.191 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:04.191 node0=1024 expecting 1024
00:05:04.191 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:04.191 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
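What the scans in this log boil down to: get_meminfo in setup/common.sh walks a meminfo file one 'Field: value' pair at a time and echoes the value of the single requested field; every skipped field shows up as one IFS/read/continue triple in the xtrace. A minimal reconstruction under that reading (the variable names get, node, mem_f, var, val are taken from the trace; the body is a sketch, not the verbatim SPDK source):

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=${2:-}
	local var val _
	local mem_f mem
	mem_f=/proc/meminfo
	# With a node argument, read that node's statistics instead.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node <N> "; strip it.
	mem=("${mem[@]#Node +([0-9]) }")
	# Scan field by field; each non-matching field is one of the
	# "continue" lines in the trace.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Surp      # system-wide: prints 0 in this run
get_meminfo HugePages_Total 0   # node0: prints 1024 in this run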
00:05:04.191 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:04.191 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:04.191 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:04.191 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:04.191 05:51:55 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:04.450 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:04.714 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:04.714 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:04.714 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:04.714 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:04.714 INFO: Requested 512 hugepages but 1024 already allocated on node0
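This INFO line is the crux of the no_shrink_alloc case: with CLEAR_HUGE=no, scripts/setup.sh must leave the existing, larger reservation alone rather than shrink it to the requested size. A sketch of the traced invocation (NRHUGE and CLEAR_HUGE are the script's environment knobs, as set at setup/hugepages.sh@202; 'setup output' is just the harness wrapper around scripts/setup.sh):

# Ask for fewer pages than are already reserved, without clearing:
export CLEAR_HUGE=no   # keep whatever is already reserved
export NRHUGE=512      # request 512 x 2 MiB hugepages
/home/vagrant/spdk_repo/spdk/scripts/setup.sh
# With 1024 pages already allocated on node0, the script keeps all of
# them and only reports the INFO line above; verify_nr_hugepages then
# confirms the pool still holds 1024 pages.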
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.714 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6797216 kB' 'MemAvailable: 9486604 kB' 'Buffers: 2436 kB' 'Cached: 2893752 kB' 'SwapCached: 0 kB' 'Active: 459896 kB' 'Inactive: 2553672 kB' 'Active(anon): 127856 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118976 kB' 'Mapped: 48240 kB' 'Shmem: 10476 kB' 'KReclaimable: 81676 kB' 'Slab: 159944 kB' 'SReclaimable: 81676 kB' 'SUnreclaim: 78268 kB' 'KernelStack: 6320 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 336276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:04.715 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace collapsed: per-field scan of /proc/meminfo; fields MemTotal through HardwareCorrupted skipped via continue]
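The guard traced at setup/hugepages.sh@96 compares the mode string 'always [madvise] never' against the pattern *[never]*: the kernel brackets the active THP mode, so anything other than '[never]' means transparent hugepages are live and AnonHugePages has to be sampled. A sketch of that check, assuming the standard sysfs knob (the path is inferred from the string format; it does not appear in the trace):

# The THP mode string looks like: always [madvise] never
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
	# THP can inflate AnonHugePages behind the test's back, so sample
	# it up front -- 0 kB in this run, per the echo 0 that follows.
	anon=$(get_meminfo AnonHugePages)
fi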
00:05:04.716 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:04.716 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.716 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:04.716 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:04.716 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:04.716 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:04.716 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:04.716 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:04.716 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:04.716 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.716 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.716 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.716 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.716 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.716 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.716 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.716 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6796964 kB' 'MemAvailable: 9486352 kB' 'Buffers: 2436 kB' 'Cached: 2893752 kB' 'SwapCached: 0 kB' 'Active: 459452 kB' 'Inactive: 2553672 kB' 'Active(anon): 127412 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118580 kB' 'Mapped: 47824 kB' 'Shmem: 10476 kB' 'KReclaimable: 81676 kB' 'Slab: 159940 kB' 'SReclaimable: 81676 kB' 'SUnreclaim: 78264 kB' 'KernelStack: 6384 kB' 'PageTables: 3768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 336276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:04.716 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace collapsed: per-field scan of /proc/meminfo; fields MemTotal through HugePages_Rsvd skipped via continue]
00:05:04.717 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.717 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.718 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:04.718 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
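At this point verify_nr_hugepages has filled in two of the three counters it declared locals for at @92-@94. A compressed view of the bookkeeping so far (values taken from this run; resv is sampled next at @100):

anon=$(get_meminfo AnonHugePages)    # 0 -> no stray THP pages counted
surp=$(get_meminfo HugePages_Surp)   # 0 -> nothing overcommitted
resv=$(get_meminfo HugePages_Rsvd)   # sampled next in the trace
# Meanwhile the pool itself is intact: HugePages_Total and
# HugePages_Free both read 1024, matching the earlier assertion
# 'node0=1024 expecting 1024'.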
00:05:04.718 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:04.718 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:04.718 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:04.718 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:04.718 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:04.718 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.718 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.718 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.718 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.718 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.718 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.718 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6796964 kB' 'MemAvailable: 9486352 kB' 'Buffers: 2436 kB' 'Cached: 2893752 kB' 'SwapCached: 0 kB' 'Active: 459420 kB' 'Inactive: 2553672 kB' 'Active(anon): 127380 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118540 kB' 'Mapped: 47824 kB' 'Shmem: 10476 kB' 'KReclaimable: 81676 kB' 'Slab: 159940 kB' 'SReclaimable: 81676 kB' 'SUnreclaim: 78264 kB' 'KernelStack: 6368 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 336276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:05:04.718 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace collapsed: per-field scan of /proc/meminfo; fields MemTotal through VmallocTotal skipped via continue]
05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:04.719 nr_hugepages=1024 00:05:04.719 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:04.719 resv_hugepages=0 00:05:04.720 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:04.720 surplus_hugepages=0 00:05:04.720 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:04.720 anon_hugepages=0 00:05:04.720 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:04.720 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 
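The run of records condensed above comes from a tiny loop in setup/common.sh (@31-@33): every /proc/meminfo field is read with "read -r var val _" and skipped with "continue" until the requested key appears, at which point its value is echoed back to the caller. A minimal standalone sketch of that lookup pattern, illustrative rather than the script verbatim:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo field lookup traced above: split each
    # meminfo line on ': ' and skip every key that is not the one asked for.
    get=HugePages_Rsvd
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # these are the "continue" records
        echo "$val"                        # traced as "echo 0" then "return 0"
        break
    done < /proc/meminfo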
00:05:04.720 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:04.720 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:04.720 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:04.720 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:04.720 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:04.720 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:04.720 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.720 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.720 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.720 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.720 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.720 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.720 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6796964 kB' 'MemAvailable: 9486352 kB' 'Buffers: 2436 kB' 'Cached: 2893752 kB' 'SwapCached: 0 kB' 'Active: 459168 kB' 'Inactive: 2553672 kB' 'Active(anon): 127128 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118516 kB' 'Mapped: 47824 kB' 'Shmem: 10476 kB' 'KReclaimable: 81676 kB' 'Slab: 159940 kB' 'SReclaimable: 81676 kB' 'SUnreclaim: 78264 kB' 'KernelStack: 6336 kB' 'PageTables: 3608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 336276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[... xtrace condensed: the same setup/common.sh@31-32 read/continue scan over every key above until HugePages_Total matched ...]
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
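The @22-@24 records in this function show how get_meminfo chooses its input: with no node argument the probe path /sys/devices/system/node/node/meminfo cannot exist, so /proc/meminfo is kept, while the node=0 call below substitutes the per-node file and strips the "Node 0 " prefix from each line. A sketch of that selection, assuming the traced lines are the whole of the logic:

    #!/usr/bin/env bash
    shopt -s extglob                 # needed for the +([0-9]) prefix strip
    node=${1:-}                      # empty: system-wide; "0": NUMA node 0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        # only exists when a real node number was supplied
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # per-node meminfo lines read "Node 0 MemTotal: ..."; drop the prefix
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"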
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.721 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 6796964 kB' 'MemUsed: 5444996 kB' 'SwapCached: 0 kB' 'Active: 459228 kB' 'Inactive: 2553672 kB' 'Active(anon): 127188 kB' 'Inactive(anon): 0 kB' 'Active(file): 332040 kB' 'Inactive(file): 2553672 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 2896188 kB' 'Mapped: 47824 kB' 'AnonPages: 118576 kB' 'Shmem: 10476 kB' 'KernelStack: 6368 kB' 'PageTables: 3712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 81676 kB' 'Slab: 159940 kB' 'SReclaimable: 81676 kB' 'SUnreclaim: 78264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace condensed: the same per-key read/continue scan over the node0 meminfo fields above, none matching HugePages_Surp until the last ...]
00:05:04.723 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.723 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.723 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:04.723 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:04.723 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:04.723 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:04.723 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
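The @115-@127 records fold the reserved and surplus counts into each node's expected total and collect the totals for the final comparison against what the kernel reports. A minimal reconstruction of that bookkeeping, with the surrounding harness assumed rather than shown:

    #!/usr/bin/env bash
    # Indexed arrays keyed by NUMA node id, as the trace suggests.
    nodes_test=([0]=1024)   # pages the test expects on each node
    sorted_t=()             # set of expected totals, kept for the compare step
    resv=0 surp=0           # HugePages_Rsvd / HugePages_Surp readings from above
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += surp ))
        sorted_t[nodes_test[node]]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
    done
    # prints: node0=1024 expecting 1024, matching the record below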
00:05:04.723 node0=1024 expecting 1024
00:05:04.723 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:04.723 05:51:56 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:04.723
00:05:04.723 real 0m1.370s
00:05:04.723 user 0m0.621s
00:05:04.723 sys 0m0.806s
00:05:04.723 05:51:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:04.723 05:51:56 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:04.723 ************************************
00:05:04.723 END TEST no_shrink_alloc
00:05:04.723 ************************************
00:05:04.723 05:51:56 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:05:04.723 05:51:56 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:05:04.723 05:51:56 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:04.723 05:51:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:04.723 05:51:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:04.723 05:51:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:04.723 05:51:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:04.723 05:51:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:04.984 05:51:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:04.984 05:51:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:04.984
00:05:04.984 real 0m5.817s
00:05:04.984 user 0m2.675s
00:05:04.984 sys 0m3.334s
00:05:04.984 05:51:56 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:04.984 05:51:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:04.984 ************************************
00:05:04.984 END TEST hugepages
00:05:04.984 ************************************
00:05:04.984 05:51:56 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:05:04.984 05:51:56 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:05:04.984 05:51:56 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:04.984 05:51:56 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:04.984 05:51:56 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:04.984 ************************************
00:05:04.984 START TEST driver
00:05:04.984 ************************************
00:05:04.984 05:51:56 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:05:04.984 * Looking for test storage...
00:05:04.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:04.984 05:51:56 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:04.984 05:51:56 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:04.984 05:51:56 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.550 05:52:02 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:11.550 05:52:02 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.550 05:52:02 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.550 05:52:02 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:11.550 ************************************ 00:05:11.550 START TEST guess_driver 00:05:11.550 ************************************ 00:05:11.550 05:52:02 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:11.550 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:11.550 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:11.550 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:11.550 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:11.550 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:11.550 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:11.550 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:11.550 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:11.550 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:11.550 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:11.550 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:11.550 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:11.550 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:11.550 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:11.550 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:11.550 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:11.550 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:11.551 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:11.551 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:11.551 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:11.551 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:11.551 Looking for driver=uio_pci_generic 00:05:11.551 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:11.551 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:11.551 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
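The guess_driver trace above shows the selection order: vfio wins when the kernel exposes populated IOMMU groups (or unsafe no-IOMMU mode is switched on), otherwise the test falls back to uio_pci_generic, provided modprobe --show-depends can resolve it to a .ko. A condensed sketch of that decision, assuming a host with modprobe available:

    # Condensed sketch of the probe traced above: prefer vfio, else uio.
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*)
        [[ -e ${groups[0]} ]] || groups=()   # an empty glob leaves the pattern behind
        local noiommu=/sys/module/vfio/parameters/enable_unsafe_noiommu_mode
        if (( ${#groups[@]} > 0 )) || [[ -e $noiommu && $(<"$noiommu") == Y ]]; then
            echo vfio
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            # The trace matched the insmod lines against *.ko*, so .ko.xz also counts.
            echo uio_pci_generic
        else
            echo 'No valid driver found' >&2
            return 1
        fi
    }

In the run above both IOMMU checks failed ((( 0 > 0 )) and [[ '' == Y ]]), so the log settles on uio_pci_generic.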
00:05:11.551 05:52:02 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.551 05:52:02 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:11.551 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:11.551 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:11.551 05:52:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.116 05:52:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.116 05:52:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:12.116 05:52:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.116 05:52:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.116 05:52:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:12.116 05:52:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.116 05:52:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.116 05:52:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:12.116 05:52:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.116 05:52:03 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.116 05:52:03 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:12.116 05:52:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.116 05:52:03 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:12.116 05:52:03 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:12.116 05:52:03 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:12.116 05:52:03 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:18.677 00:05:18.677 real 0m7.098s 00:05:18.677 user 0m0.787s 00:05:18.677 sys 0m1.402s 00:05:18.677 05:52:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.677 ************************************ 00:05:18.677 05:52:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:18.677 END TEST guess_driver 00:05:18.677 ************************************ 00:05:18.677 05:52:09 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:18.677 00:05:18.677 real 0m13.134s 00:05:18.677 user 0m1.116s 00:05:18.677 sys 0m2.201s 00:05:18.677 05:52:09 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.677 ************************************ 00:05:18.677 END TEST driver 00:05:18.677 05:52:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:18.677 ************************************ 00:05:18.677 05:52:09 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:18.677 05:52:09 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:18.677 05:52:09 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.677 05:52:09 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 
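Each suite here is launched through run_test, which prints the starred START/END banners, times the body (the real/user/sys lines above), and forwards the exit status; the '[' 2 -le 1 ']' checks in the log appear to be its argument validation. A minimal sketch of such a wrapper, assuming plain bash and omitting the xtrace plumbing the real common/autotest_common.sh helper adds:

    # Banner-and-timing wrapper in the style of run_test above.
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                # bash's time keyword; $? is the body's status
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }

The real helper also toggles xtrace around the body, which is why every suite in this log ends with xtrace_disable and set +x.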
00:05:18.677 05:52:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:18.677 ************************************ 00:05:18.677 START TEST devices 00:05:18.677 ************************************ 00:05:18.677 05:52:09 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:18.677 * Looking for test storage... 00:05:18.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:18.677 05:52:09 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:18.677 05:52:09 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:18.677 05:52:09 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:18.677 05:52:09 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:05:19.243 05:52:10 setup.sh.devices -- 
common/autotest_common.sh@1662 -- # local device=nvme2n3 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:19.243 05:52:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:19.243 05:52:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:19.243 05:52:10 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:19.243 No valid GPT data, bailing 00:05:19.243 05:52:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:19.243 05:52:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:19.243 05:52:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:19.243 05:52:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:19.243 05:52:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:19.243 05:52:10 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@200 -- # for 
block in "/sys/block/nvme"!(*c*) 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:19.243 05:52:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:19.243 05:52:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:19.243 05:52:10 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:19.502 No valid GPT data, bailing 00:05:19.502 05:52:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:19.502 05:52:11 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:19.502 05:52:11 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:19.502 05:52:11 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:19.502 05:52:11 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:19.502 05:52:11 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:05:19.502 05:52:11 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:05:19.502 05:52:11 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:05:19.502 No valid GPT data, bailing 00:05:19.502 05:52:11 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:19.502 05:52:11 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:19.502 05:52:11 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:05:19.502 05:52:11 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:05:19.502 05:52:11 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:05:19.502 05:52:11 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@201 -- # 
ctrl=nvme2n2 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:05:19.502 05:52:11 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:05:19.502 05:52:11 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:05:19.502 No valid GPT data, bailing 00:05:19.502 05:52:11 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:19.502 05:52:11 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:19.502 05:52:11 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:05:19.502 05:52:11 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:05:19.502 05:52:11 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:05:19.502 05:52:11 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:19.502 05:52:11 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:05:19.502 05:52:11 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:05:19.502 05:52:11 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:05:19.761 No valid GPT data, bailing 00:05:19.761 05:52:11 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:19.761 05:52:11 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:19.761 05:52:11 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:19.761 05:52:11 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:05:19.761 05:52:11 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:05:19.761 05:52:11 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:05:19.761 05:52:11 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:19.761 05:52:11 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:19.761 05:52:11 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:19.761 05:52:11 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:19.761 05:52:11 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:19.761 05:52:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:05:19.761 05:52:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:05:19.762 
05:52:11 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:05:19.762 05:52:11 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:19.762 05:52:11 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:05:19.762 05:52:11 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:05:19.762 05:52:11 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:05:19.762 No valid GPT data, bailing 00:05:19.762 05:52:11 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:19.762 05:52:11 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:19.762 05:52:11 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:19.762 05:52:11 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:05:19.762 05:52:11 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:05:19.762 05:52:11 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:05:19.762 05:52:11 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:05:19.762 05:52:11 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:05:19.762 05:52:11 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:05:19.762 05:52:11 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:19.762 05:52:11 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:19.762 05:52:11 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.762 05:52:11 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.762 05:52:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:19.762 ************************************ 00:05:19.762 START TEST nvme_mount 00:05:19.762 ************************************ 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- 
setup/common.sh@46 -- # (( part++ )) 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:19.762 05:52:11 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:20.698 Creating new GPT entries in memory. 00:05:20.698 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:20.698 other utilities. 00:05:20.698 05:52:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:20.698 05:52:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.698 05:52:12 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:20.698 05:52:12 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:20.698 05:52:12 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:22.114 Creating new GPT entries in memory. 00:05:22.114 The operation has completed successfully. 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 71483 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- 
setup/devices.sh@59 -- # local pci status 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.114 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.372 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.372 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.372 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.372 05:52:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.630 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:22.630 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.888 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.888 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:22.888 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.888 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:22.888 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:22.888 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:22.888 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.888 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:22.888 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.888 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:22.888 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 
(ext4): 53 ef 00:05:22.888 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.888 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:23.146 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:23.146 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:23.146 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:23.146 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.146 05:52:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:23.405 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.405 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:23.405 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:23.405 05:52:14 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.405 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.405 05:52:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.405 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.405 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.663 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.663 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.663 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.663 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.921 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:23.921 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.181 05:52:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:24.440 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.440 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding 
PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:24.440 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:24.440 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.440 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.440 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.440 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.440 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.699 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.699 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.699 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.699 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.958 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:24.958 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.217 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.217 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:25.217 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:25.217 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:25.217 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.217 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:25.217 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:25.217 05:52:16 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:25.217 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:25.217 00:05:25.217 real 0m5.434s 00:05:25.217 user 0m1.535s 00:05:25.217 sys 0m1.573s 00:05:25.217 05:52:16 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.217 05:52:16 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:25.217 ************************************ 00:05:25.217 END TEST nvme_mount 00:05:25.217 ************************************ 00:05:25.217 05:52:16 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:25.217 05:52:16 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:25.217 05:52:16 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.217 05:52:16 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.217 05:52:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:25.217 ************************************ 00:05:25.217 START TEST dm_mount 00:05:25.217 ************************************ 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- 
# pv0=nvme0n1p1 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.217 05:52:16 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:25.218 05:52:16 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:25.218 05:52:16 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:26.154 Creating new GPT entries in memory. 00:05:26.154 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:26.154 other utilities. 00:05:26.154 05:52:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:26.154 05:52:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:26.154 05:52:17 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:26.154 05:52:17 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:26.154 05:52:17 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:27.531 Creating new GPT entries in memory. 00:05:27.531 The operation has completed successfully. 00:05:27.531 05:52:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:27.531 05:52:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:27.531 05:52:18 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:27.531 05:52:18 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:27.531 05:52:18 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:28.467 The operation has completed successfully. 
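partition_drive, traced above for both nvme_mount and dm_mount, zaps the disk and then lays out fixed-size partitions back to back, starting at sector 2048. The arithmetic in the trace reproduces exactly: size=1073741824 divided by 4096 gives 262144 sectors per partition, hence --new=1:2048:264191 and --new=2:264192:526335. A simplified sketch follows (device name illustrative; the real helper also syncs kernel uevents via sync_dev_uevents.sh, omitted here):

    disk=/dev/nvme0n1      # illustrative target
    part_no=2
    size=1073741824        # bytes, as in setup/common.sh
    (( size /= 4096 ))     # 262144 -> per-partition length in sectors
    sgdisk "$disk" --zap-all
    part_start=0 part_end=0
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
    done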
00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 72108 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:28.467 05:52:19 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:28.467 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:28.467 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:28.467 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:28.467 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:28.467 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:28.467 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:28.467 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:28.467 05:52:20 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:05:28.467 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:28.467 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.467 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:28.467 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:28.467 05:52:20 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.467 05:52:20 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:28.726 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.726 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:28.726 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:28.726 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.726 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.726 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.726 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.726 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.984 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.984 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.984 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:28.984 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.242 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:29.242 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.500 05:52:20 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:29.500 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:29.500 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:29.500 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:29.500 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.500 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:29.500 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.758 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:29.758 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.758 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:29.758 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.758 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:29.758 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.016 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:30.016 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.274 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:30.274 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:30.274 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:30.274 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:30.274 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.274 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:30.274 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:30.274 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:30.274 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:05:30.274 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:30.274 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:30.274 05:52:21 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:30.532 00:05:30.532 real 0m5.175s 00:05:30.532 user 0m0.989s 00:05:30.532 sys 0m1.101s 00:05:30.532 05:52:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.532 05:52:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:30.532 ************************************ 00:05:30.532 END TEST dm_mount 00:05:30.532 ************************************ 00:05:30.532 05:52:22 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:30.532 05:52:22 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:30.532 05:52:22 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:30.532 05:52:22 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:30.532 05:52:22 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:30.532 05:52:22 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:30.532 05:52:22 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:30.532 05:52:22 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:30.791 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:30.791 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:30.791 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:30.791 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:30.791 05:52:22 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:30.791 05:52:22 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:30.791 05:52:22 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:30.791 05:52:22 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:30.791 05:52:22 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:30.791 05:52:22 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:30.791 05:52:22 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:30.791 ************************************ 00:05:30.791 END TEST devices 00:05:30.791 ************************************ 00:05:30.791 00:05:30.791 real 0m12.679s 00:05:30.791 user 0m3.448s 00:05:30.791 sys 0m3.495s 00:05:30.791 05:52:22 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.791 05:52:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:30.791 05:52:22 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:30.791 ************************************ 00:05:30.791 END TEST setup.sh 00:05:30.791 ************************************ 00:05:30.791 00:05:30.791 real 0m43.838s 00:05:30.791 user 0m10.426s 00:05:30.791 sys 0m13.045s 00:05:30.791 05:52:22 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.791 05:52:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:30.791 05:52:22 -- common/autotest_common.sh@1142 -- # return 0 00:05:30.791 05:52:22 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:31.359 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:31.925 Hugepages 00:05:31.925 node hugesize free / total 00:05:31.925 node0 1048576kB 0 / 0 00:05:31.925 node0 2048kB 2048 / 2048 00:05:31.925 00:05:31.925 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:31.925 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:31.925 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:32.183 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:32.183 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:32.183 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:32.183 05:52:23 -- spdk/autotest.sh@130 -- # uname -s 00:05:32.183 05:52:23 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:32.183 05:52:23 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:32.183 05:52:23 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:32.749 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.317 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.317 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.317 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.577 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:33.577 05:52:25 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:34.533 05:52:26 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:34.533 05:52:26 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:34.533 05:52:26 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:34.533 05:52:26 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:34.533 05:52:26 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:34.533 05:52:26 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:34.533 05:52:26 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:34.533 05:52:26 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:34.533 05:52:26 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:34.533 05:52:26 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:05:34.533 05:52:26 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:34.533 05:52:26 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:35.105 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.105 Waiting for block devices as requested 00:05:35.105 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:35.364 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:35.364 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:35.364 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:40.633 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:40.633 05:52:32 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:40.633 05:52:32 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:40.633 05:52:32 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:40.633 05:52:32 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:40.633 05:52:32 -- common/autotest_common.sh@1502 -- # 
bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:40.633 05:52:32 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:40.633 05:52:32 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:40.633 05:52:32 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:40.633 05:52:32 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:40.633 05:52:32 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:40.633 05:52:32 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:40.633 05:52:32 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:40.633 05:52:32 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:40.633 05:52:32 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:40.633 05:52:32 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:40.633 05:52:32 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:40.633 05:52:32 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:40.633 05:52:32 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:40.633 05:52:32 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:40.633 05:52:32 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:40.633 05:52:32 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:40.633 05:52:32 -- common/autotest_common.sh@1557 -- # continue 00:05:40.633 05:52:32 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:40.633 05:52:32 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:40.633 05:52:32 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:40.633 05:52:32 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:05:40.633 05:52:32 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:40.633 05:52:32 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:40.633 05:52:32 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:40.633 05:52:32 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:40.633 05:52:32 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:40.633 05:52:32 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:40.633 05:52:32 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:40.633 05:52:32 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:40.633 05:52:32 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:40.633 05:52:32 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:40.633 05:52:32 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:40.633 05:52:32 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:40.633 05:52:32 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:40.633 05:52:32 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:40.633 05:52:32 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:40.633 05:52:32 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:40.633 05:52:32 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:40.633 05:52:32 -- common/autotest_common.sh@1557 -- # continue 00:05:40.633 05:52:32 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:40.633 05:52:32 -- common/autotest_common.sh@1539 -- # 
get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:40.633 05:52:32 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:40.633 05:52:32 -- common/autotest_common.sh@1502 -- # grep 0000:00:12.0/nvme/nvme 00:05:40.633 05:52:32 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:40.633 05:52:32 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:40.633 05:52:32 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:40.633 05:52:32 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:05:40.633 05:52:32 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:05:40.633 05:52:32 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:05:40.633 05:52:32 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:05:40.633 05:52:32 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:40.633 05:52:32 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:40.633 05:52:32 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:40.633 05:52:32 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:40.633 05:52:32 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:40.633 05:52:32 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:05:40.633 05:52:32 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:40.633 05:52:32 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:40.633 05:52:32 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:40.633 05:52:32 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:40.633 05:52:32 -- common/autotest_common.sh@1557 -- # continue 00:05:40.633 05:52:32 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:40.633 05:52:32 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:40.633 05:52:32 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:05:40.633 05:52:32 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:40.633 05:52:32 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:40.633 05:52:32 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:40.633 05:52:32 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:40.633 05:52:32 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:05:40.633 05:52:32 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:05:40.633 05:52:32 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:05:40.633 05:52:32 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:05:40.633 05:52:32 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:40.633 05:52:32 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:40.633 05:52:32 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:40.633 05:52:32 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:40.633 05:52:32 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:40.633 05:52:32 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:05:40.633 05:52:32 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:40.633 05:52:32 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:40.633 05:52:32 -- 
common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:40.633 05:52:32 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:40.633 05:52:32 -- common/autotest_common.sh@1557 -- # continue 00:05:40.633 05:52:32 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:40.633 05:52:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:40.633 05:52:32 -- common/autotest_common.sh@10 -- # set +x 00:05:40.633 05:52:32 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:40.633 05:52:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:40.633 05:52:32 -- common/autotest_common.sh@10 -- # set +x 00:05:40.633 05:52:32 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:41.210 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.780 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:41.780 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:41.780 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:41.780 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.038 05:52:33 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:42.038 05:52:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:42.038 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:05:42.038 05:52:33 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:42.038 05:52:33 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:42.038 05:52:33 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:42.038 05:52:33 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:42.038 05:52:33 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:42.038 05:52:33 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:42.038 05:52:33 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:42.038 05:52:33 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:42.038 05:52:33 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:42.038 05:52:33 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:42.038 05:52:33 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:42.038 05:52:33 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:05:42.038 05:52:33 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:42.038 05:52:33 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:42.038 05:52:33 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:42.038 05:52:33 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:42.038 05:52:33 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:42.038 05:52:33 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:42.038 05:52:33 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:42.038 05:52:33 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:42.038 05:52:33 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:42.038 05:52:33 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:42.038 05:52:33 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:42.038 05:52:33 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:42.038 05:52:33 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:42.038 05:52:33 -- 
common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:42.038 05:52:33 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:42.038 05:52:33 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:42.038 05:52:33 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:42.038 05:52:33 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:42.038 05:52:33 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:42.038 05:52:33 -- common/autotest_common.sh@1593 -- # return 0 00:05:42.038 05:52:33 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:42.038 05:52:33 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:42.038 05:52:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:42.038 05:52:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:42.038 05:52:33 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:42.038 05:52:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.038 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:05:42.038 05:52:33 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:42.038 05:52:33 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:42.038 05:52:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.038 05:52:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.038 05:52:33 -- common/autotest_common.sh@10 -- # set +x 00:05:42.038 ************************************ 00:05:42.038 START TEST env 00:05:42.038 ************************************ 00:05:42.038 05:52:33 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:42.296 * Looking for test storage... 00:05:42.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:42.297 05:52:33 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:42.297 05:52:33 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.297 05:52:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.297 05:52:33 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.297 ************************************ 00:05:42.297 START TEST env_memory 00:05:42.297 ************************************ 00:05:42.297 05:52:33 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:42.297 00:05:42.297 00:05:42.297 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.297 http://cunit.sourceforge.net/ 00:05:42.297 00:05:42.297 00:05:42.297 Suite: memory 00:05:42.297 Test: alloc and free memory map ...[2024-07-13 05:52:33.894563] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:42.297 passed 00:05:42.297 Test: mem map translation ...[2024-07-13 05:52:33.955904] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:42.297 [2024-07-13 05:52:33.956021] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:42.297 [2024-07-13 05:52:33.956209] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:42.297 [2024-07-13 05:52:33.956277] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:42.555 passed 00:05:42.555 Test: mem map registration ...[2024-07-13 05:52:34.055731] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:42.555 [2024-07-13 05:52:34.055810] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:42.555 passed 00:05:42.555 Test: mem map adjacent registrations ...passed 00:05:42.555 00:05:42.555 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.555 suites 1 1 n/a 0 0 00:05:42.555 tests 4 4 4 0 0 00:05:42.555 asserts 152 152 152 0 n/a 00:05:42.555 00:05:42.555 Elapsed time = 0.352 seconds 00:05:42.555 ************************************ 00:05:42.555 END TEST env_memory 00:05:42.555 ************************************ 00:05:42.555 00:05:42.555 real 0m0.387s 00:05:42.555 user 0m0.359s 00:05:42.555 sys 0m0.023s 00:05:42.555 05:52:34 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.555 05:52:34 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:42.555 05:52:34 env -- common/autotest_common.sh@1142 -- # return 0 00:05:42.555 05:52:34 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:42.555 05:52:34 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.555 05:52:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.555 05:52:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.555 ************************************ 00:05:42.555 START TEST env_vtophys 00:05:42.555 ************************************ 00:05:42.555 05:52:34 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:42.813 EAL: lib.eal log level changed from notice to debug 00:05:42.813 EAL: Detected lcore 0 as core 0 on socket 0 00:05:42.813 EAL: Detected lcore 1 as core 0 on socket 0 00:05:42.813 EAL: Detected lcore 2 as core 0 on socket 0 00:05:42.813 EAL: Detected lcore 3 as core 0 on socket 0 00:05:42.813 EAL: Detected lcore 4 as core 0 on socket 0 00:05:42.813 EAL: Detected lcore 5 as core 0 on socket 0 00:05:42.813 EAL: Detected lcore 6 as core 0 on socket 0 00:05:42.813 EAL: Detected lcore 7 as core 0 on socket 0 00:05:42.813 EAL: Detected lcore 8 as core 0 on socket 0 00:05:42.813 EAL: Detected lcore 9 as core 0 on socket 0 00:05:42.813 EAL: Maximum logical cores by configuration: 128 00:05:42.813 EAL: Detected CPU lcores: 10 00:05:42.813 EAL: Detected NUMA nodes: 1 00:05:42.813 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:42.813 EAL: Detected shared linkage of DPDK 00:05:42.813 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:42.813 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:42.813 EAL: Registered [vdev] bus. 
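The "open shared lib" lines above show EAL resolving its bus, mempool, and net drivers at startup by dlopen()ing the PMD shared objects shipped with the DPDK build. A quick way to see which plugins a build makes available, assuming the plugin directory used in this run:

    # list the PMD plugins EAL can load from this DPDK 23.0-ABI build
    ls /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/*.so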
00:05:42.813 EAL: bus.vdev log level changed from disabled to notice 00:05:42.813 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:42.813 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:42.813 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:42.813 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:42.813 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:42.813 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:42.813 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:42.813 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:42.813 EAL: No shared files mode enabled, IPC will be disabled 00:05:42.813 EAL: No shared files mode enabled, IPC is disabled 00:05:42.813 EAL: Selected IOVA mode 'PA' 00:05:42.813 EAL: Probing VFIO support... 00:05:42.813 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:42.813 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:42.813 EAL: Ask a virtual area of 0x2e000 bytes 00:05:42.813 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:42.813 EAL: Setting up physically contiguous memory... 00:05:42.813 EAL: Setting maximum number of open files to 524288 00:05:42.813 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:42.813 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:42.813 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.813 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:42.813 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.813 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.813 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:42.813 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:42.813 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.813 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:42.813 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.813 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.813 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:42.813 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:42.813 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.813 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:42.813 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.813 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.813 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:42.813 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:42.813 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.813 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:42.813 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.813 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.813 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:42.813 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:42.813 EAL: Hugepages will be freed exactly as allocated. 
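Each of the four memseg lists above reserves 0x400000000 bytes of virtual address space, which is exactly n_segs (8192) times the 2 MiB hugepage size; the small 0x61000-byte areas hold the list metadata. The reservation size can be checked with shell arithmetic:

    # 8192 segments x 2 MiB hugepages = 0x400000000 bytes (16 GiB) per memseg list
    printf '0x%x\n' $((8192 * 2 * 1024 * 1024))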
00:05:42.813 EAL: No shared files mode enabled, IPC is disabled 00:05:42.813 EAL: No shared files mode enabled, IPC is disabled 00:05:42.813 EAL: TSC frequency is ~2200000 KHz 00:05:42.813 EAL: Main lcore 0 is ready (tid=7f1e745bca40;cpuset=[0]) 00:05:42.813 EAL: Trying to obtain current memory policy. 00:05:42.813 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.813 EAL: Restoring previous memory policy: 0 00:05:42.813 EAL: request: mp_malloc_sync 00:05:42.813 EAL: No shared files mode enabled, IPC is disabled 00:05:42.813 EAL: Heap on socket 0 was expanded by 2MB 00:05:42.813 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:42.813 EAL: No shared files mode enabled, IPC is disabled 00:05:42.813 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:42.813 EAL: Mem event callback 'spdk:(nil)' registered 00:05:42.813 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:42.813 00:05:42.813 00:05:42.813 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.813 http://cunit.sourceforge.net/ 00:05:42.813 00:05:42.813 00:05:42.813 Suite: components_suite 00:05:43.381 Test: vtophys_malloc_test ...passed 00:05:43.381 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:43.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.381 EAL: Restoring previous memory policy: 4 00:05:43.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.381 EAL: request: mp_malloc_sync 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Heap on socket 0 was expanded by 4MB 00:05:43.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.381 EAL: request: mp_malloc_sync 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Heap on socket 0 was shrunk by 4MB 00:05:43.381 EAL: Trying to obtain current memory policy. 00:05:43.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.381 EAL: Restoring previous memory policy: 4 00:05:43.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.381 EAL: request: mp_malloc_sync 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Heap on socket 0 was expanded by 6MB 00:05:43.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.381 EAL: request: mp_malloc_sync 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Heap on socket 0 was shrunk by 6MB 00:05:43.381 EAL: Trying to obtain current memory policy. 00:05:43.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.381 EAL: Restoring previous memory policy: 4 00:05:43.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.381 EAL: request: mp_malloc_sync 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Heap on socket 0 was expanded by 10MB 00:05:43.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.381 EAL: request: mp_malloc_sync 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Heap on socket 0 was shrunk by 10MB 00:05:43.381 EAL: Trying to obtain current memory policy. 
00:05:43.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.381 EAL: Restoring previous memory policy: 4 00:05:43.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.381 EAL: request: mp_malloc_sync 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Heap on socket 0 was expanded by 18MB 00:05:43.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.381 EAL: request: mp_malloc_sync 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Heap on socket 0 was shrunk by 18MB 00:05:43.381 EAL: Trying to obtain current memory policy. 00:05:43.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.381 EAL: Restoring previous memory policy: 4 00:05:43.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.381 EAL: request: mp_malloc_sync 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Heap on socket 0 was expanded by 34MB 00:05:43.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.381 EAL: request: mp_malloc_sync 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Heap on socket 0 was shrunk by 34MB 00:05:43.381 EAL: Trying to obtain current memory policy. 00:05:43.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.381 EAL: Restoring previous memory policy: 4 00:05:43.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.381 EAL: request: mp_malloc_sync 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Heap on socket 0 was expanded by 66MB 00:05:43.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.381 EAL: request: mp_malloc_sync 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Heap on socket 0 was shrunk by 66MB 00:05:43.381 EAL: Trying to obtain current memory policy. 00:05:43.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.381 EAL: Restoring previous memory policy: 4 00:05:43.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.381 EAL: request: mp_malloc_sync 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Heap on socket 0 was expanded by 130MB 00:05:43.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.381 EAL: request: mp_malloc_sync 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Heap on socket 0 was shrunk by 130MB 00:05:43.381 EAL: Trying to obtain current memory policy. 00:05:43.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.381 EAL: Restoring previous memory policy: 4 00:05:43.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.381 EAL: request: mp_malloc_sync 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Heap on socket 0 was expanded by 258MB 00:05:43.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.381 EAL: request: mp_malloc_sync 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Heap on socket 0 was shrunk by 258MB 00:05:43.381 EAL: Trying to obtain current memory policy. 
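The expand/shrink sizes in vtophys_spdk_malloc_test follow a 2^k + 2 MB progression (4, 6, 10, 18, 34, 66, 130, 258 MB so far, continuing below), so each round roughly doubles how far the heap grows. The observed sequence can be reproduced with:

    # allocation sizes exercised by the test, as observed in this log: 2^k + 2 MB
    for k in $(seq 1 10); do echo "$(( (1 << k) + 2 ))MB"; done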
00:05:43.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.381 EAL: Restoring previous memory policy: 4 00:05:43.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.381 EAL: request: mp_malloc_sync 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Heap on socket 0 was expanded by 514MB 00:05:43.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.640 EAL: request: mp_malloc_sync 00:05:43.640 EAL: No shared files mode enabled, IPC is disabled 00:05:43.640 EAL: Heap on socket 0 was shrunk by 514MB 00:05:43.640 EAL: Trying to obtain current memory policy. 00:05:43.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.640 EAL: Restoring previous memory policy: 4 00:05:43.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.640 EAL: request: mp_malloc_sync 00:05:43.640 EAL: No shared files mode enabled, IPC is disabled 00:05:43.640 EAL: Heap on socket 0 was expanded by 1026MB 00:05:43.898 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.898 passed 00:05:43.898 00:05:43.898 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.898 suites 1 1 n/a 0 0 00:05:43.898 tests 2 2 2 0 0 00:05:43.898 asserts 5274 5274 5274 0 n/a 00:05:43.898 00:05:43.898 Elapsed time = 1.079 seconds 00:05:43.898 EAL: request: mp_malloc_sync 00:05:43.898 EAL: No shared files mode enabled, IPC is disabled 00:05:43.898 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:43.898 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.898 EAL: request: mp_malloc_sync 00:05:43.898 EAL: No shared files mode enabled, IPC is disabled 00:05:43.898 EAL: Heap on socket 0 was shrunk by 2MB 00:05:43.898 EAL: No shared files mode enabled, IPC is disabled 00:05:43.898 EAL: No shared files mode enabled, IPC is disabled 00:05:43.898 EAL: No shared files mode enabled, IPC is disabled 00:05:43.898 00:05:43.898 real 0m1.319s 00:05:43.898 user 0m0.605s 00:05:43.898 sys 0m0.582s 00:05:43.898 ************************************ 00:05:43.898 END TEST env_vtophys 00:05:43.898 ************************************ 00:05:43.898 05:52:35 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.898 05:52:35 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:43.898 05:52:35 env -- common/autotest_common.sh@1142 -- # return 0 00:05:43.898 05:52:35 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:43.898 05:52:35 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.898 05:52:35 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.898 05:52:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.157 ************************************ 00:05:44.157 START TEST env_pci 00:05:44.157 ************************************ 00:05:44.157 05:52:35 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:44.157 00:05:44.157 00:05:44.157 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.157 http://cunit.sourceforge.net/ 00:05:44.157 00:05:44.157 00:05:44.157 Suite: pci 00:05:44.157 Test: pci_hook ...[2024-07-13 05:52:35.658817] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 73881 has claimed it 00:05:44.157 passed 00:05:44.157 00:05:44.157 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.157 suites 1 1 n/a 0 0 00:05:44.157 tests 1 1 1 0 0 00:05:44.157 asserts 25 25 25 0 n/a 00:05:44.157 
00:05:44.157 Elapsed time = 0.007 seconds 00:05:44.157 EAL: Cannot find device (10000:00:01.0) 00:05:44.157 EAL: Failed to attach device on primary process 00:05:44.157 00:05:44.157 real 0m0.060s 00:05:44.157 user 0m0.034s 00:05:44.157 sys 0m0.025s 00:05:44.157 05:52:35 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.157 ************************************ 00:05:44.157 END TEST env_pci 00:05:44.157 ************************************ 00:05:44.157 05:52:35 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:44.157 05:52:35 env -- common/autotest_common.sh@1142 -- # return 0 00:05:44.157 05:52:35 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:44.157 05:52:35 env -- env/env.sh@15 -- # uname 00:05:44.157 05:52:35 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:44.157 05:52:35 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:44.157 05:52:35 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:44.157 05:52:35 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:44.157 05:52:35 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.157 05:52:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.157 ************************************ 00:05:44.157 START TEST env_dpdk_post_init 00:05:44.157 ************************************ 00:05:44.157 05:52:35 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:44.157 EAL: Detected CPU lcores: 10 00:05:44.157 EAL: Detected NUMA nodes: 1 00:05:44.157 EAL: Detected shared linkage of DPDK 00:05:44.157 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:44.157 EAL: Selected IOVA mode 'PA' 00:05:44.416 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:44.416 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:44.416 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:44.416 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:44.416 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:44.416 Starting DPDK initialization... 00:05:44.416 Starting SPDK post initialization... 00:05:44.416 SPDK NVMe probe 00:05:44.416 Attaching to 0000:00:10.0 00:05:44.416 Attaching to 0000:00:11.0 00:05:44.416 Attaching to 0000:00:12.0 00:05:44.416 Attaching to 0000:00:13.0 00:05:44.416 Attached to 0000:00:11.0 00:05:44.416 Attached to 0000:00:13.0 00:05:44.416 Attached to 0000:00:10.0 00:05:44.416 Attached to 0000:00:12.0 00:05:44.416 Cleaning up... 
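env_dpdk_post_init just probed all four controllers with the spdk_nvme userspace driver; earlier, setup.sh rebound the same BDFs between the kernel nvme driver and uio_pci_generic. Which driver currently owns a PCI function can be read straight from sysfs:

    # show the kernel driver currently bound to a PCI function (BDF from this run)
    basename "$(readlink -f /sys/bus/pci/devices/0000:00:10.0/driver)"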
00:05:44.416 00:05:44.416 real 0m0.239s 00:05:44.416 user 0m0.069s 00:05:44.416 sys 0m0.072s 00:05:44.416 05:52:35 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.416 05:52:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:44.416 ************************************ 00:05:44.416 END TEST env_dpdk_post_init 00:05:44.416 ************************************ 00:05:44.416 05:52:36 env -- common/autotest_common.sh@1142 -- # return 0 00:05:44.416 05:52:36 env -- env/env.sh@26 -- # uname 00:05:44.416 05:52:36 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:44.416 05:52:36 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.416 05:52:36 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.416 05:52:36 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.416 05:52:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.416 ************************************ 00:05:44.416 START TEST env_mem_callbacks 00:05:44.416 ************************************ 00:05:44.416 05:52:36 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.416 EAL: Detected CPU lcores: 10 00:05:44.416 EAL: Detected NUMA nodes: 1 00:05:44.416 EAL: Detected shared linkage of DPDK 00:05:44.416 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:44.416 EAL: Selected IOVA mode 'PA' 00:05:44.675 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:44.675 00:05:44.675 00:05:44.675 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.675 http://cunit.sourceforge.net/ 00:05:44.675 00:05:44.675 00:05:44.675 Suite: memory 00:05:44.675 Test: test ... 
00:05:44.675 register 0x200000200000 2097152 00:05:44.675 malloc 3145728 00:05:44.675 register 0x200000400000 4194304 00:05:44.675 buf 0x200000500000 len 3145728 PASSED 00:05:44.675 malloc 64 00:05:44.675 buf 0x2000004fff40 len 64 PASSED 00:05:44.675 malloc 4194304 00:05:44.675 register 0x200000800000 6291456 00:05:44.675 buf 0x200000a00000 len 4194304 PASSED 00:05:44.675 free 0x200000500000 3145728 00:05:44.675 free 0x2000004fff40 64 00:05:44.675 unregister 0x200000400000 4194304 PASSED 00:05:44.675 free 0x200000a00000 4194304 00:05:44.675 unregister 0x200000800000 6291456 PASSED 00:05:44.675 malloc 8388608 00:05:44.675 register 0x200000400000 10485760 00:05:44.675 buf 0x200000600000 len 8388608 PASSED 00:05:44.675 free 0x200000600000 8388608 00:05:44.675 unregister 0x200000400000 10485760 PASSED 00:05:44.675 passed 00:05:44.675 00:05:44.675 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.675 suites 1 1 n/a 0 0 00:05:44.675 tests 1 1 1 0 0 00:05:44.675 asserts 15 15 15 0 n/a 00:05:44.675 00:05:44.675 Elapsed time = 0.010 seconds 00:05:44.675 00:05:44.675 real 0m0.170s 00:05:44.675 user 0m0.025s 00:05:44.675 sys 0m0.043s 00:05:44.675 05:52:36 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.675 ************************************ 00:05:44.675 05:52:36 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:44.675 END TEST env_mem_callbacks 00:05:44.675 ************************************ 00:05:44.675 05:52:36 env -- common/autotest_common.sh@1142 -- # return 0 00:05:44.675 00:05:44.675 real 0m2.528s 00:05:44.675 user 0m1.207s 00:05:44.675 sys 0m0.960s 00:05:44.675 05:52:36 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.675 ************************************ 00:05:44.675 END TEST env 00:05:44.675 05:52:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.675 ************************************ 00:05:44.675 05:52:36 -- common/autotest_common.sh@1142 -- # return 0 00:05:44.675 05:52:36 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:44.675 05:52:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.675 05:52:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.675 05:52:36 -- common/autotest_common.sh@10 -- # set +x 00:05:44.675 ************************************ 00:05:44.675 START TEST rpc 00:05:44.675 ************************************ 00:05:44.675 05:52:36 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:44.675 * Looking for test storage... 00:05:44.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:44.675 05:52:36 rpc -- rpc/rpc.sh@65 -- # spdk_pid=73995 00:05:44.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.675 05:52:36 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:44.675 05:52:36 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.675 05:52:36 rpc -- rpc/rpc.sh@67 -- # waitforlisten 73995 00:05:44.675 05:52:36 rpc -- common/autotest_common.sh@829 -- # '[' -z 73995 ']' 00:05:44.675 05:52:36 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.675 05:52:36 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.675 05:52:36 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
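waitforlisten blocks until the freshly started spdk_tgt (pid 73995) answers RPCs on /var/tmp/spdk.sock. A simplified sketch of that polling loop, assuming SPDK's stock rpc.py client rather than the helper's exact internals:

    # poll the RPC socket until the target responds (simplified sketch)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done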
00:05:44.676 05:52:36 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.676 05:52:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.934 [2024-07-13 05:52:36.517362] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:44.934 [2024-07-13 05:52:36.518061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73995 ] 00:05:45.193 [2024-07-13 05:52:36.670099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.193 [2024-07-13 05:52:36.715070] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:45.193 [2024-07-13 05:52:36.715172] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 73995' to capture a snapshot of events at runtime. 00:05:45.193 [2024-07-13 05:52:36.715196] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:45.193 [2024-07-13 05:52:36.715232] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:45.193 [2024-07-13 05:52:36.715255] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid73995 for offline analysis/debug. 00:05:45.193 [2024-07-13 05:52:36.715301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.761 05:52:37 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.761 05:52:37 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:45.761 05:52:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:45.761 05:52:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:45.761 05:52:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:45.761 05:52:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:45.761 05:52:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.761 05:52:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.761 05:52:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.761 ************************************ 00:05:45.761 START TEST rpc_integrity 00:05:45.761 ************************************ 00:05:45.761 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:45.761 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.761 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.761 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.761 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.761 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.761 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:45.761 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.761 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.761 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.761 05:52:37 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.761 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.020 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:46.020 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.020 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.020 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.020 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.020 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.020 { 00:05:46.020 "name": "Malloc0", 00:05:46.020 "aliases": [ 00:05:46.020 "e1a8af72-cb7c-42a6-bb0f-81705ba2c75e" 00:05:46.020 ], 00:05:46.020 "product_name": "Malloc disk", 00:05:46.020 "block_size": 512, 00:05:46.020 "num_blocks": 16384, 00:05:46.020 "uuid": "e1a8af72-cb7c-42a6-bb0f-81705ba2c75e", 00:05:46.020 "assigned_rate_limits": { 00:05:46.020 "rw_ios_per_sec": 0, 00:05:46.020 "rw_mbytes_per_sec": 0, 00:05:46.020 "r_mbytes_per_sec": 0, 00:05:46.020 "w_mbytes_per_sec": 0 00:05:46.020 }, 00:05:46.020 "claimed": false, 00:05:46.020 "zoned": false, 00:05:46.020 "supported_io_types": { 00:05:46.020 "read": true, 00:05:46.020 "write": true, 00:05:46.020 "unmap": true, 00:05:46.020 "flush": true, 00:05:46.020 "reset": true, 00:05:46.020 "nvme_admin": false, 00:05:46.020 "nvme_io": false, 00:05:46.020 "nvme_io_md": false, 00:05:46.020 "write_zeroes": true, 00:05:46.020 "zcopy": true, 00:05:46.020 "get_zone_info": false, 00:05:46.020 "zone_management": false, 00:05:46.020 "zone_append": false, 00:05:46.020 "compare": false, 00:05:46.020 "compare_and_write": false, 00:05:46.020 "abort": true, 00:05:46.020 "seek_hole": false, 00:05:46.020 "seek_data": false, 00:05:46.020 "copy": true, 00:05:46.020 "nvme_iov_md": false 00:05:46.020 }, 00:05:46.020 "memory_domains": [ 00:05:46.020 { 00:05:46.020 "dma_device_id": "system", 00:05:46.020 "dma_device_type": 1 00:05:46.020 }, 00:05:46.020 { 00:05:46.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.020 "dma_device_type": 2 00:05:46.020 } 00:05:46.020 ], 00:05:46.020 "driver_specific": {} 00:05:46.020 } 00:05:46.020 ]' 00:05:46.020 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:46.020 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.020 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:46.020 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.020 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.020 [2024-07-13 05:52:37.565525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:46.020 [2024-07-13 05:52:37.565615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.020 [2024-07-13 05:52:37.565668] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:46.020 [2024-07-13 05:52:37.565690] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.020 [2024-07-13 05:52:37.568614] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.020 [2024-07-13 05:52:37.568722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.020 Passthru0 00:05:46.020 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.020 
05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.020 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.020 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.020 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.020 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.020 { 00:05:46.020 "name": "Malloc0", 00:05:46.020 "aliases": [ 00:05:46.020 "e1a8af72-cb7c-42a6-bb0f-81705ba2c75e" 00:05:46.020 ], 00:05:46.020 "product_name": "Malloc disk", 00:05:46.020 "block_size": 512, 00:05:46.020 "num_blocks": 16384, 00:05:46.020 "uuid": "e1a8af72-cb7c-42a6-bb0f-81705ba2c75e", 00:05:46.020 "assigned_rate_limits": { 00:05:46.020 "rw_ios_per_sec": 0, 00:05:46.020 "rw_mbytes_per_sec": 0, 00:05:46.020 "r_mbytes_per_sec": 0, 00:05:46.020 "w_mbytes_per_sec": 0 00:05:46.020 }, 00:05:46.020 "claimed": true, 00:05:46.020 "claim_type": "exclusive_write", 00:05:46.020 "zoned": false, 00:05:46.020 "supported_io_types": { 00:05:46.020 "read": true, 00:05:46.020 "write": true, 00:05:46.020 "unmap": true, 00:05:46.020 "flush": true, 00:05:46.020 "reset": true, 00:05:46.020 "nvme_admin": false, 00:05:46.020 "nvme_io": false, 00:05:46.020 "nvme_io_md": false, 00:05:46.020 "write_zeroes": true, 00:05:46.020 "zcopy": true, 00:05:46.020 "get_zone_info": false, 00:05:46.020 "zone_management": false, 00:05:46.020 "zone_append": false, 00:05:46.020 "compare": false, 00:05:46.020 "compare_and_write": false, 00:05:46.020 "abort": true, 00:05:46.020 "seek_hole": false, 00:05:46.020 "seek_data": false, 00:05:46.020 "copy": true, 00:05:46.020 "nvme_iov_md": false 00:05:46.020 }, 00:05:46.020 "memory_domains": [ 00:05:46.020 { 00:05:46.020 "dma_device_id": "system", 00:05:46.020 "dma_device_type": 1 00:05:46.020 }, 00:05:46.020 { 00:05:46.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.020 "dma_device_type": 2 00:05:46.020 } 00:05:46.020 ], 00:05:46.020 "driver_specific": {} 00:05:46.020 }, 00:05:46.020 { 00:05:46.020 "name": "Passthru0", 00:05:46.020 "aliases": [ 00:05:46.020 "5840402d-f8b5-5b91-9ea1-8d3e6636dc1d" 00:05:46.020 ], 00:05:46.020 "product_name": "passthru", 00:05:46.020 "block_size": 512, 00:05:46.020 "num_blocks": 16384, 00:05:46.020 "uuid": "5840402d-f8b5-5b91-9ea1-8d3e6636dc1d", 00:05:46.020 "assigned_rate_limits": { 00:05:46.020 "rw_ios_per_sec": 0, 00:05:46.020 "rw_mbytes_per_sec": 0, 00:05:46.020 "r_mbytes_per_sec": 0, 00:05:46.020 "w_mbytes_per_sec": 0 00:05:46.020 }, 00:05:46.020 "claimed": false, 00:05:46.020 "zoned": false, 00:05:46.020 "supported_io_types": { 00:05:46.020 "read": true, 00:05:46.020 "write": true, 00:05:46.020 "unmap": true, 00:05:46.020 "flush": true, 00:05:46.020 "reset": true, 00:05:46.020 "nvme_admin": false, 00:05:46.020 "nvme_io": false, 00:05:46.020 "nvme_io_md": false, 00:05:46.020 "write_zeroes": true, 00:05:46.020 "zcopy": true, 00:05:46.020 "get_zone_info": false, 00:05:46.020 "zone_management": false, 00:05:46.020 "zone_append": false, 00:05:46.020 "compare": false, 00:05:46.020 "compare_and_write": false, 00:05:46.020 "abort": true, 00:05:46.020 "seek_hole": false, 00:05:46.020 "seek_data": false, 00:05:46.020 "copy": true, 00:05:46.020 "nvme_iov_md": false 00:05:46.020 }, 00:05:46.020 "memory_domains": [ 00:05:46.020 { 00:05:46.020 "dma_device_id": "system", 00:05:46.020 "dma_device_type": 1 00:05:46.020 }, 00:05:46.020 { 00:05:46.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.020 "dma_device_type": 2 
00:05:46.020 } 00:05:46.020 ], 00:05:46.020 "driver_specific": { 00:05:46.020 "passthru": { 00:05:46.020 "name": "Passthru0", 00:05:46.020 "base_bdev_name": "Malloc0" 00:05:46.020 } 00:05:46.020 } 00:05:46.020 } 00:05:46.020 ]' 00:05:46.020 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:46.020 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.020 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.020 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.020 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.020 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.020 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:46.020 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.020 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.020 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.020 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.020 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.020 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.020 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.020 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.020 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:46.021 05:52:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.021 00:05:46.021 real 0m0.335s 00:05:46.021 user 0m0.231s 00:05:46.021 sys 0m0.040s 00:05:46.021 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.021 ************************************ 00:05:46.021 END TEST rpc_integrity 00:05:46.021 ************************************ 00:05:46.021 05:52:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.280 05:52:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.280 05:52:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:46.280 05:52:37 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.280 05:52:37 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.280 05:52:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.280 ************************************ 00:05:46.280 START TEST rpc_plugins 00:05:46.280 ************************************ 00:05:46.280 05:52:37 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:46.280 05:52:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:46.280 05:52:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.280 05:52:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.280 05:52:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.280 05:52:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:46.280 05:52:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:46.280 05:52:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.280 05:52:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.280 05:52:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.280 05:52:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
bdevs='[ 00:05:46.280 { 00:05:46.280 "name": "Malloc1", 00:05:46.280 "aliases": [ 00:05:46.280 "49accebd-b5bb-4e8b-9ebe-4945e221c032" 00:05:46.280 ], 00:05:46.280 "product_name": "Malloc disk", 00:05:46.280 "block_size": 4096, 00:05:46.280 "num_blocks": 256, 00:05:46.280 "uuid": "49accebd-b5bb-4e8b-9ebe-4945e221c032", 00:05:46.280 "assigned_rate_limits": { 00:05:46.280 "rw_ios_per_sec": 0, 00:05:46.280 "rw_mbytes_per_sec": 0, 00:05:46.280 "r_mbytes_per_sec": 0, 00:05:46.280 "w_mbytes_per_sec": 0 00:05:46.280 }, 00:05:46.280 "claimed": false, 00:05:46.280 "zoned": false, 00:05:46.280 "supported_io_types": { 00:05:46.280 "read": true, 00:05:46.280 "write": true, 00:05:46.280 "unmap": true, 00:05:46.280 "flush": true, 00:05:46.280 "reset": true, 00:05:46.280 "nvme_admin": false, 00:05:46.280 "nvme_io": false, 00:05:46.280 "nvme_io_md": false, 00:05:46.280 "write_zeroes": true, 00:05:46.280 "zcopy": true, 00:05:46.280 "get_zone_info": false, 00:05:46.280 "zone_management": false, 00:05:46.280 "zone_append": false, 00:05:46.280 "compare": false, 00:05:46.280 "compare_and_write": false, 00:05:46.280 "abort": true, 00:05:46.280 "seek_hole": false, 00:05:46.280 "seek_data": false, 00:05:46.280 "copy": true, 00:05:46.280 "nvme_iov_md": false 00:05:46.280 }, 00:05:46.280 "memory_domains": [ 00:05:46.280 { 00:05:46.280 "dma_device_id": "system", 00:05:46.280 "dma_device_type": 1 00:05:46.280 }, 00:05:46.280 { 00:05:46.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.280 "dma_device_type": 2 00:05:46.280 } 00:05:46.280 ], 00:05:46.280 "driver_specific": {} 00:05:46.280 } 00:05:46.280 ]' 00:05:46.280 05:52:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:46.280 05:52:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:46.280 05:52:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:46.280 05:52:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.280 05:52:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.280 05:52:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.280 05:52:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:46.280 05:52:37 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.280 05:52:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.280 05:52:37 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.280 05:52:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:46.280 05:52:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:46.280 05:52:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:46.280 00:05:46.280 real 0m0.171s 00:05:46.280 user 0m0.116s 00:05:46.280 sys 0m0.019s 00:05:46.280 05:52:37 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.280 ************************************ 00:05:46.280 END TEST rpc_plugins 00:05:46.280 05:52:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.280 ************************************ 00:05:46.539 05:52:38 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.539 05:52:38 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:46.539 05:52:38 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.539 05:52:38 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.539 05:52:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.539 ************************************ 00:05:46.539 
START TEST rpc_trace_cmd_test 00:05:46.539 ************************************ 00:05:46.539 05:52:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:46.539 05:52:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:46.539 05:52:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:46.539 05:52:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.539 05:52:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.539 05:52:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.539 05:52:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:46.539 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid73995", 00:05:46.539 "tpoint_group_mask": "0x8", 00:05:46.539 "iscsi_conn": { 00:05:46.539 "mask": "0x2", 00:05:46.539 "tpoint_mask": "0x0" 00:05:46.539 }, 00:05:46.539 "scsi": { 00:05:46.539 "mask": "0x4", 00:05:46.539 "tpoint_mask": "0x0" 00:05:46.539 }, 00:05:46.539 "bdev": { 00:05:46.539 "mask": "0x8", 00:05:46.539 "tpoint_mask": "0xffffffffffffffff" 00:05:46.539 }, 00:05:46.539 "nvmf_rdma": { 00:05:46.539 "mask": "0x10", 00:05:46.539 "tpoint_mask": "0x0" 00:05:46.539 }, 00:05:46.539 "nvmf_tcp": { 00:05:46.539 "mask": "0x20", 00:05:46.539 "tpoint_mask": "0x0" 00:05:46.539 }, 00:05:46.539 "ftl": { 00:05:46.539 "mask": "0x40", 00:05:46.539 "tpoint_mask": "0x0" 00:05:46.539 }, 00:05:46.539 "blobfs": { 00:05:46.539 "mask": "0x80", 00:05:46.539 "tpoint_mask": "0x0" 00:05:46.539 }, 00:05:46.539 "dsa": { 00:05:46.539 "mask": "0x200", 00:05:46.539 "tpoint_mask": "0x0" 00:05:46.539 }, 00:05:46.539 "thread": { 00:05:46.539 "mask": "0x400", 00:05:46.539 "tpoint_mask": "0x0" 00:05:46.539 }, 00:05:46.539 "nvme_pcie": { 00:05:46.539 "mask": "0x800", 00:05:46.539 "tpoint_mask": "0x0" 00:05:46.539 }, 00:05:46.539 "iaa": { 00:05:46.539 "mask": "0x1000", 00:05:46.539 "tpoint_mask": "0x0" 00:05:46.539 }, 00:05:46.539 "nvme_tcp": { 00:05:46.539 "mask": "0x2000", 00:05:46.539 "tpoint_mask": "0x0" 00:05:46.539 }, 00:05:46.539 "bdev_nvme": { 00:05:46.539 "mask": "0x4000", 00:05:46.539 "tpoint_mask": "0x0" 00:05:46.539 }, 00:05:46.539 "sock": { 00:05:46.539 "mask": "0x8000", 00:05:46.539 "tpoint_mask": "0x0" 00:05:46.539 } 00:05:46.539 }' 00:05:46.539 05:52:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:46.539 05:52:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:46.539 05:52:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:46.539 05:52:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:46.539 05:52:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:46.539 05:52:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:46.539 05:52:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:46.539 05:52:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:46.539 05:52:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:46.799 05:52:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:46.799 00:05:46.799 real 0m0.286s 00:05:46.799 user 0m0.248s 00:05:46.799 sys 0m0.028s 00:05:46.799 05:52:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.799 05:52:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.799 ************************************ 00:05:46.799 END 
TEST rpc_trace_cmd_test 00:05:46.799 ************************************ 00:05:46.799 05:52:38 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.799 05:52:38 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:46.799 05:52:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:46.799 05:52:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:46.799 05:52:38 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.799 05:52:38 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.799 05:52:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.799 ************************************ 00:05:46.799 START TEST rpc_daemon_integrity 00:05:46.799 ************************************ 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.799 { 00:05:46.799 "name": "Malloc2", 00:05:46.799 "aliases": [ 00:05:46.799 "0694b2c1-a46e-4bf9-b339-48afa0235753" 00:05:46.799 ], 00:05:46.799 "product_name": "Malloc disk", 00:05:46.799 "block_size": 512, 00:05:46.799 "num_blocks": 16384, 00:05:46.799 "uuid": "0694b2c1-a46e-4bf9-b339-48afa0235753", 00:05:46.799 "assigned_rate_limits": { 00:05:46.799 "rw_ios_per_sec": 0, 00:05:46.799 "rw_mbytes_per_sec": 0, 00:05:46.799 "r_mbytes_per_sec": 0, 00:05:46.799 "w_mbytes_per_sec": 0 00:05:46.799 }, 00:05:46.799 "claimed": false, 00:05:46.799 "zoned": false, 00:05:46.799 "supported_io_types": { 00:05:46.799 "read": true, 00:05:46.799 "write": true, 00:05:46.799 "unmap": true, 00:05:46.799 "flush": true, 00:05:46.799 "reset": true, 00:05:46.799 "nvme_admin": false, 00:05:46.799 "nvme_io": false, 00:05:46.799 "nvme_io_md": false, 00:05:46.799 "write_zeroes": true, 00:05:46.799 "zcopy": true, 00:05:46.799 "get_zone_info": false, 00:05:46.799 "zone_management": false, 00:05:46.799 "zone_append": false, 00:05:46.799 "compare": false, 00:05:46.799 "compare_and_write": false, 00:05:46.799 "abort": true, 00:05:46.799 "seek_hole": false, 
00:05:46.799 "seek_data": false, 00:05:46.799 "copy": true, 00:05:46.799 "nvme_iov_md": false 00:05:46.799 }, 00:05:46.799 "memory_domains": [ 00:05:46.799 { 00:05:46.799 "dma_device_id": "system", 00:05:46.799 "dma_device_type": 1 00:05:46.799 }, 00:05:46.799 { 00:05:46.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.799 "dma_device_type": 2 00:05:46.799 } 00:05:46.799 ], 00:05:46.799 "driver_specific": {} 00:05:46.799 } 00:05:46.799 ]' 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.799 [2024-07-13 05:52:38.506290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:46.799 [2024-07-13 05:52:38.506378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.799 [2024-07-13 05:52:38.506408] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:46.799 [2024-07-13 05:52:38.506425] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.799 [2024-07-13 05:52:38.508922] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.799 [2024-07-13 05:52:38.509018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.799 Passthru0 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.799 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:47.059 { 00:05:47.059 "name": "Malloc2", 00:05:47.059 "aliases": [ 00:05:47.059 "0694b2c1-a46e-4bf9-b339-48afa0235753" 00:05:47.059 ], 00:05:47.059 "product_name": "Malloc disk", 00:05:47.059 "block_size": 512, 00:05:47.059 "num_blocks": 16384, 00:05:47.059 "uuid": "0694b2c1-a46e-4bf9-b339-48afa0235753", 00:05:47.059 "assigned_rate_limits": { 00:05:47.059 "rw_ios_per_sec": 0, 00:05:47.059 "rw_mbytes_per_sec": 0, 00:05:47.059 "r_mbytes_per_sec": 0, 00:05:47.059 "w_mbytes_per_sec": 0 00:05:47.059 }, 00:05:47.059 "claimed": true, 00:05:47.059 "claim_type": "exclusive_write", 00:05:47.059 "zoned": false, 00:05:47.059 "supported_io_types": { 00:05:47.059 "read": true, 00:05:47.059 "write": true, 00:05:47.059 "unmap": true, 00:05:47.059 "flush": true, 00:05:47.059 "reset": true, 00:05:47.059 "nvme_admin": false, 00:05:47.059 "nvme_io": false, 00:05:47.059 "nvme_io_md": false, 00:05:47.059 "write_zeroes": true, 00:05:47.059 "zcopy": true, 00:05:47.059 "get_zone_info": false, 00:05:47.059 "zone_management": false, 00:05:47.059 "zone_append": false, 00:05:47.059 "compare": false, 00:05:47.059 "compare_and_write": false, 00:05:47.059 "abort": true, 00:05:47.059 "seek_hole": false, 00:05:47.059 "seek_data": false, 00:05:47.059 "copy": true, 00:05:47.059 "nvme_iov_md": false 00:05:47.059 }, 00:05:47.059 
"memory_domains": [ 00:05:47.059 { 00:05:47.059 "dma_device_id": "system", 00:05:47.059 "dma_device_type": 1 00:05:47.059 }, 00:05:47.059 { 00:05:47.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.059 "dma_device_type": 2 00:05:47.059 } 00:05:47.059 ], 00:05:47.059 "driver_specific": {} 00:05:47.059 }, 00:05:47.059 { 00:05:47.059 "name": "Passthru0", 00:05:47.059 "aliases": [ 00:05:47.059 "3bfd07b9-2b7f-5a61-8d0d-928589a56e46" 00:05:47.059 ], 00:05:47.059 "product_name": "passthru", 00:05:47.059 "block_size": 512, 00:05:47.059 "num_blocks": 16384, 00:05:47.059 "uuid": "3bfd07b9-2b7f-5a61-8d0d-928589a56e46", 00:05:47.059 "assigned_rate_limits": { 00:05:47.059 "rw_ios_per_sec": 0, 00:05:47.059 "rw_mbytes_per_sec": 0, 00:05:47.059 "r_mbytes_per_sec": 0, 00:05:47.059 "w_mbytes_per_sec": 0 00:05:47.059 }, 00:05:47.059 "claimed": false, 00:05:47.059 "zoned": false, 00:05:47.059 "supported_io_types": { 00:05:47.059 "read": true, 00:05:47.059 "write": true, 00:05:47.059 "unmap": true, 00:05:47.059 "flush": true, 00:05:47.059 "reset": true, 00:05:47.059 "nvme_admin": false, 00:05:47.059 "nvme_io": false, 00:05:47.059 "nvme_io_md": false, 00:05:47.059 "write_zeroes": true, 00:05:47.059 "zcopy": true, 00:05:47.059 "get_zone_info": false, 00:05:47.059 "zone_management": false, 00:05:47.059 "zone_append": false, 00:05:47.059 "compare": false, 00:05:47.059 "compare_and_write": false, 00:05:47.059 "abort": true, 00:05:47.059 "seek_hole": false, 00:05:47.059 "seek_data": false, 00:05:47.059 "copy": true, 00:05:47.059 "nvme_iov_md": false 00:05:47.059 }, 00:05:47.059 "memory_domains": [ 00:05:47.059 { 00:05:47.059 "dma_device_id": "system", 00:05:47.059 "dma_device_type": 1 00:05:47.059 }, 00:05:47.059 { 00:05:47.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.059 "dma_device_type": 2 00:05:47.059 } 00:05:47.059 ], 00:05:47.059 "driver_specific": { 00:05:47.059 "passthru": { 00:05:47.059 "name": "Passthru0", 00:05:47.059 "base_bdev_name": "Malloc2" 00:05:47.059 } 00:05:47.059 } 00:05:47.059 } 00:05:47.059 ]' 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:47.059 
05:52:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:47.059 00:05:47.059 real 0m0.321s 00:05:47.059 user 0m0.219s 00:05:47.059 sys 0m0.041s 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.059 05:52:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.059 ************************************ 00:05:47.059 END TEST rpc_daemon_integrity 00:05:47.059 ************************************ 00:05:47.059 05:52:38 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:47.059 05:52:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:47.059 05:52:38 rpc -- rpc/rpc.sh@84 -- # killprocess 73995 00:05:47.059 05:52:38 rpc -- common/autotest_common.sh@948 -- # '[' -z 73995 ']' 00:05:47.059 05:52:38 rpc -- common/autotest_common.sh@952 -- # kill -0 73995 00:05:47.059 05:52:38 rpc -- common/autotest_common.sh@953 -- # uname 00:05:47.059 05:52:38 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.059 05:52:38 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73995 00:05:47.059 05:52:38 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.059 killing process with pid 73995 00:05:47.059 05:52:38 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.059 05:52:38 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73995' 00:05:47.059 05:52:38 rpc -- common/autotest_common.sh@967 -- # kill 73995 00:05:47.059 05:52:38 rpc -- common/autotest_common.sh@972 -- # wait 73995 00:05:47.628 00:05:47.628 real 0m2.739s 00:05:47.628 user 0m3.592s 00:05:47.628 sys 0m0.672s 00:05:47.628 05:52:39 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.628 05:52:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.628 ************************************ 00:05:47.628 END TEST rpc 00:05:47.628 ************************************ 00:05:47.628 05:52:39 -- common/autotest_common.sh@1142 -- # return 0 00:05:47.628 05:52:39 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:47.628 05:52:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.628 05:52:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.628 05:52:39 -- common/autotest_common.sh@10 -- # set +x 00:05:47.628 ************************************ 00:05:47.628 START TEST skip_rpc 00:05:47.628 ************************************ 00:05:47.628 05:52:39 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:47.628 * Looking for test storage... 
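Editor's note: the rpc_integrity, rpc_plugins, rpc_trace_cmd_test and rpc_daemon_integrity cases that end above all follow the same create/inspect/delete pattern, asserted with `jq length` over `bdev_get_bdevs` output. A minimal sketch of that round-trip outside the harness, assuming a target is already listening on the default socket and the working directory is an SPDK checkout (every RPC used here appears verbatim in the trace above):

    # sketch: the integrity round-trip the tests above assert
    rpc=scripts/rpc.py

    malloc=$("$rpc" bdev_malloc_create 8 512)             # 8 MiB, 512-byte blocks; prints the bdev name
    "$rpc" bdev_passthru_create -b "$malloc" -p Passthru0 # stack a passthru vbdev on top

    count=$("$rpc" bdev_get_bdevs | jq length)
    [ "$count" -eq 2 ] || { echo "expected 2 bdevs, got $count" >&2; exit 1; }

    "$rpc" bdev_passthru_delete Passthru0                 # tear down in reverse order
    "$rpc" bdev_malloc_delete "$malloc"

    count=$("$rpc" bdev_get_bdevs | jq length)
    [ "$count" -eq 0 ] || { echo "expected 0 bdevs, got $count" >&2; exit 1; }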
00:05:47.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.628 05:52:39 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:47.628 05:52:39 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:47.628 05:52:39 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:47.628 05:52:39 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.628 05:52:39 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.628 05:52:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.628 ************************************ 00:05:47.628 START TEST skip_rpc 00:05:47.628 ************************************ 00:05:47.628 05:52:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:47.628 05:52:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=74188 00:05:47.628 05:52:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.628 05:52:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:47.628 05:52:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:47.628 [2024-07-13 05:52:39.327357] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:47.628 [2024-07-13 05:52:39.327544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74188 ] 00:05:47.887 [2024-07-13 05:52:39.476760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.887 [2024-07-13 05:52:39.511155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 74188 
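Editor's note: skip_rpc's first case, traced above, launches the target with --no-rpc-server and asserts that any RPC then fails; the NOT wrapper inverts the exit status. Reduced to its essentials (the 5-second sleep mirrors the harness, which cannot poll a socket that will never appear; this is a sketch, not the harness's exact code):

    # sketch: an RPC against a target started without an RPC server must fail
    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                    # no socket will appear, so just wait

    if scripts/rpc.py spdk_get_version 2>/dev/null; then
        echo "FAIL: RPC succeeded despite --no-rpc-server" >&2
        kill "$tgt_pid"; exit 1
    fi
    echo "OK: RPC refused as expected"
    kill "$tgt_pid"; wait "$tgt_pid" 2>/dev/null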
00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 74188 ']' 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 74188 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74188 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.160 killing process with pid 74188 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74188' 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 74188 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 74188 00:05:53.160 00:05:53.160 real 0m5.314s 00:05:53.160 user 0m4.965s 00:05:53.160 sys 0m0.246s 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.160 05:52:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.160 ************************************ 00:05:53.160 END TEST skip_rpc 00:05:53.160 ************************************ 00:05:53.160 05:52:44 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:53.160 05:52:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:53.160 05:52:44 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.160 05:52:44 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.160 05:52:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.160 ************************************ 00:05:53.160 START TEST skip_rpc_with_json 00:05:53.160 ************************************ 00:05:53.160 05:52:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:53.160 05:52:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:53.161 05:52:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=74276 00:05:53.161 05:52:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.161 05:52:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 74276 00:05:53.161 05:52:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.161 05:52:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 74276 ']' 00:05:53.161 05:52:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.161 05:52:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.161 05:52:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
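Editor's note: the "Waiting for process to start up and listen on UNIX domain socket ..." message above comes from the waitforlisten helper in autotest_common.sh. A simplified re-implementation of the idea, assuming the function name and retry budget here are illustrative (the real helper also checks /proc/<pid> and takes a configurable timeout):

    # sketch: poll until the target answers RPC on its socket (waitforlisten-style)
    wait_for_rpc() {                           # illustrative name, not the real helper
        local sock=${1:-/var/tmp/spdk.sock} tries=100
        while (( tries-- > 0 )); do
            scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        echo "timed out waiting for $sock" >&2
        return 1
    }

    build/bin/spdk_tgt -m 0x1 &
    wait_for_rpc /var/tmp/spdk.sock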
00:05:53.161 05:52:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.161 05:52:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.161 [2024-07-13 05:52:44.690039] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:53.161 [2024-07-13 05:52:44.690226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74276 ] 00:05:53.161 [2024-07-13 05:52:44.837073] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.161 [2024-07-13 05:52:44.870605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.099 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.099 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:54.099 05:52:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:54.099 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.099 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.099 [2024-07-13 05:52:45.579463] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:54.099 request: 00:05:54.099 { 00:05:54.099 "trtype": "tcp", 00:05:54.099 "method": "nvmf_get_transports", 00:05:54.099 "req_id": 1 00:05:54.099 } 00:05:54.099 Got JSON-RPC error response 00:05:54.099 response: 00:05:54.099 { 00:05:54.099 "code": -19, 00:05:54.099 "message": "No such device" 00:05:54.099 } 00:05:54.099 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:54.099 05:52:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:54.099 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.099 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.099 [2024-07-13 05:52:45.591623] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.099 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.099 05:52:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:54.099 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.099 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.099 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.099 05:52:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:54.099 { 00:05:54.099 "subsystems": [ 00:05:54.099 { 00:05:54.099 "subsystem": "keyring", 00:05:54.099 "config": [] 00:05:54.099 }, 00:05:54.099 { 00:05:54.099 "subsystem": "iobuf", 00:05:54.099 "config": [ 00:05:54.099 { 00:05:54.099 "method": "iobuf_set_options", 00:05:54.099 "params": { 00:05:54.099 "small_pool_count": 8192, 00:05:54.099 "large_pool_count": 1024, 00:05:54.099 "small_bufsize": 8192, 00:05:54.099 "large_bufsize": 135168 00:05:54.099 } 00:05:54.099 } 00:05:54.099 ] 00:05:54.099 }, 00:05:54.099 { 00:05:54.099 "subsystem": "sock", 00:05:54.099 "config": [ 00:05:54.099 { 00:05:54.099 "method": 
"sock_set_default_impl", 00:05:54.099 "params": { 00:05:54.099 "impl_name": "posix" 00:05:54.099 } 00:05:54.099 }, 00:05:54.099 { 00:05:54.099 "method": "sock_impl_set_options", 00:05:54.099 "params": { 00:05:54.099 "impl_name": "ssl", 00:05:54.099 "recv_buf_size": 4096, 00:05:54.099 "send_buf_size": 4096, 00:05:54.099 "enable_recv_pipe": true, 00:05:54.099 "enable_quickack": false, 00:05:54.099 "enable_placement_id": 0, 00:05:54.099 "enable_zerocopy_send_server": true, 00:05:54.099 "enable_zerocopy_send_client": false, 00:05:54.099 "zerocopy_threshold": 0, 00:05:54.099 "tls_version": 0, 00:05:54.099 "enable_ktls": false 00:05:54.099 } 00:05:54.099 }, 00:05:54.099 { 00:05:54.099 "method": "sock_impl_set_options", 00:05:54.099 "params": { 00:05:54.099 "impl_name": "posix", 00:05:54.099 "recv_buf_size": 2097152, 00:05:54.099 "send_buf_size": 2097152, 00:05:54.099 "enable_recv_pipe": true, 00:05:54.099 "enable_quickack": false, 00:05:54.099 "enable_placement_id": 0, 00:05:54.099 "enable_zerocopy_send_server": true, 00:05:54.099 "enable_zerocopy_send_client": false, 00:05:54.099 "zerocopy_threshold": 0, 00:05:54.099 "tls_version": 0, 00:05:54.099 "enable_ktls": false 00:05:54.099 } 00:05:54.099 } 00:05:54.099 ] 00:05:54.099 }, 00:05:54.099 { 00:05:54.099 "subsystem": "vmd", 00:05:54.099 "config": [] 00:05:54.099 }, 00:05:54.099 { 00:05:54.099 "subsystem": "accel", 00:05:54.099 "config": [ 00:05:54.099 { 00:05:54.099 "method": "accel_set_options", 00:05:54.099 "params": { 00:05:54.099 "small_cache_size": 128, 00:05:54.099 "large_cache_size": 16, 00:05:54.099 "task_count": 2048, 00:05:54.099 "sequence_count": 2048, 00:05:54.099 "buf_count": 2048 00:05:54.099 } 00:05:54.099 } 00:05:54.099 ] 00:05:54.099 }, 00:05:54.099 { 00:05:54.099 "subsystem": "bdev", 00:05:54.099 "config": [ 00:05:54.099 { 00:05:54.099 "method": "bdev_set_options", 00:05:54.099 "params": { 00:05:54.099 "bdev_io_pool_size": 65535, 00:05:54.099 "bdev_io_cache_size": 256, 00:05:54.099 "bdev_auto_examine": true, 00:05:54.099 "iobuf_small_cache_size": 128, 00:05:54.099 "iobuf_large_cache_size": 16 00:05:54.099 } 00:05:54.099 }, 00:05:54.099 { 00:05:54.099 "method": "bdev_raid_set_options", 00:05:54.099 "params": { 00:05:54.099 "process_window_size_kb": 1024 00:05:54.099 } 00:05:54.100 }, 00:05:54.100 { 00:05:54.100 "method": "bdev_iscsi_set_options", 00:05:54.100 "params": { 00:05:54.100 "timeout_sec": 30 00:05:54.100 } 00:05:54.100 }, 00:05:54.100 { 00:05:54.100 "method": "bdev_nvme_set_options", 00:05:54.100 "params": { 00:05:54.100 "action_on_timeout": "none", 00:05:54.100 "timeout_us": 0, 00:05:54.100 "timeout_admin_us": 0, 00:05:54.100 "keep_alive_timeout_ms": 10000, 00:05:54.100 "arbitration_burst": 0, 00:05:54.100 "low_priority_weight": 0, 00:05:54.100 "medium_priority_weight": 0, 00:05:54.100 "high_priority_weight": 0, 00:05:54.100 "nvme_adminq_poll_period_us": 10000, 00:05:54.100 "nvme_ioq_poll_period_us": 0, 00:05:54.100 "io_queue_requests": 0, 00:05:54.100 "delay_cmd_submit": true, 00:05:54.100 "transport_retry_count": 4, 00:05:54.100 "bdev_retry_count": 3, 00:05:54.100 "transport_ack_timeout": 0, 00:05:54.100 "ctrlr_loss_timeout_sec": 0, 00:05:54.100 "reconnect_delay_sec": 0, 00:05:54.100 "fast_io_fail_timeout_sec": 0, 00:05:54.100 "disable_auto_failback": false, 00:05:54.100 "generate_uuids": false, 00:05:54.100 "transport_tos": 0, 00:05:54.100 "nvme_error_stat": false, 00:05:54.100 "rdma_srq_size": 0, 00:05:54.100 "io_path_stat": false, 00:05:54.100 "allow_accel_sequence": false, 00:05:54.100 "rdma_max_cq_size": 0, 
00:05:54.100 "rdma_cm_event_timeout_ms": 0, 00:05:54.100 "dhchap_digests": [ 00:05:54.100 "sha256", 00:05:54.100 "sha384", 00:05:54.100 "sha512" 00:05:54.100 ], 00:05:54.100 "dhchap_dhgroups": [ 00:05:54.100 "null", 00:05:54.100 "ffdhe2048", 00:05:54.100 "ffdhe3072", 00:05:54.100 "ffdhe4096", 00:05:54.100 "ffdhe6144", 00:05:54.100 "ffdhe8192" 00:05:54.100 ] 00:05:54.100 } 00:05:54.100 }, 00:05:54.100 { 00:05:54.100 "method": "bdev_nvme_set_hotplug", 00:05:54.100 "params": { 00:05:54.100 "period_us": 100000, 00:05:54.100 "enable": false 00:05:54.100 } 00:05:54.100 }, 00:05:54.100 { 00:05:54.100 "method": "bdev_wait_for_examine" 00:05:54.100 } 00:05:54.100 ] 00:05:54.100 }, 00:05:54.100 { 00:05:54.100 "subsystem": "scsi", 00:05:54.100 "config": null 00:05:54.100 }, 00:05:54.100 { 00:05:54.100 "subsystem": "scheduler", 00:05:54.100 "config": [ 00:05:54.100 { 00:05:54.100 "method": "framework_set_scheduler", 00:05:54.100 "params": { 00:05:54.100 "name": "static" 00:05:54.100 } 00:05:54.100 } 00:05:54.100 ] 00:05:54.100 }, 00:05:54.100 { 00:05:54.100 "subsystem": "vhost_scsi", 00:05:54.100 "config": [] 00:05:54.100 }, 00:05:54.100 { 00:05:54.100 "subsystem": "vhost_blk", 00:05:54.100 "config": [] 00:05:54.100 }, 00:05:54.100 { 00:05:54.100 "subsystem": "ublk", 00:05:54.100 "config": [] 00:05:54.100 }, 00:05:54.100 { 00:05:54.100 "subsystem": "nbd", 00:05:54.100 "config": [] 00:05:54.100 }, 00:05:54.100 { 00:05:54.100 "subsystem": "nvmf", 00:05:54.100 "config": [ 00:05:54.100 { 00:05:54.100 "method": "nvmf_set_config", 00:05:54.100 "params": { 00:05:54.100 "discovery_filter": "match_any", 00:05:54.100 "admin_cmd_passthru": { 00:05:54.100 "identify_ctrlr": false 00:05:54.100 } 00:05:54.100 } 00:05:54.100 }, 00:05:54.100 { 00:05:54.100 "method": "nvmf_set_max_subsystems", 00:05:54.100 "params": { 00:05:54.100 "max_subsystems": 1024 00:05:54.100 } 00:05:54.100 }, 00:05:54.100 { 00:05:54.100 "method": "nvmf_set_crdt", 00:05:54.100 "params": { 00:05:54.100 "crdt1": 0, 00:05:54.100 "crdt2": 0, 00:05:54.100 "crdt3": 0 00:05:54.100 } 00:05:54.100 }, 00:05:54.100 { 00:05:54.100 "method": "nvmf_create_transport", 00:05:54.100 "params": { 00:05:54.100 "trtype": "TCP", 00:05:54.100 "max_queue_depth": 128, 00:05:54.100 "max_io_qpairs_per_ctrlr": 127, 00:05:54.100 "in_capsule_data_size": 4096, 00:05:54.100 "max_io_size": 131072, 00:05:54.100 "io_unit_size": 131072, 00:05:54.100 "max_aq_depth": 128, 00:05:54.100 "num_shared_buffers": 511, 00:05:54.100 "buf_cache_size": 4294967295, 00:05:54.100 "dif_insert_or_strip": false, 00:05:54.100 "zcopy": false, 00:05:54.100 "c2h_success": true, 00:05:54.100 "sock_priority": 0, 00:05:54.100 "abort_timeout_sec": 1, 00:05:54.100 "ack_timeout": 0, 00:05:54.100 "data_wr_pool_size": 0 00:05:54.100 } 00:05:54.100 } 00:05:54.100 ] 00:05:54.100 }, 00:05:54.100 { 00:05:54.100 "subsystem": "iscsi", 00:05:54.100 "config": [ 00:05:54.100 { 00:05:54.100 "method": "iscsi_set_options", 00:05:54.100 "params": { 00:05:54.100 "node_base": "iqn.2016-06.io.spdk", 00:05:54.100 "max_sessions": 128, 00:05:54.100 "max_connections_per_session": 2, 00:05:54.100 "max_queue_depth": 64, 00:05:54.100 "default_time2wait": 2, 00:05:54.100 "default_time2retain": 20, 00:05:54.100 "first_burst_length": 8192, 00:05:54.100 "immediate_data": true, 00:05:54.100 "allow_duplicated_isid": false, 00:05:54.100 "error_recovery_level": 0, 00:05:54.100 "nop_timeout": 60, 00:05:54.100 "nop_in_interval": 30, 00:05:54.100 "disable_chap": false, 00:05:54.100 "require_chap": false, 00:05:54.100 "mutual_chap": false, 
00:05:54.100 "chap_group": 0, 00:05:54.100 "max_large_datain_per_connection": 64, 00:05:54.100 "max_r2t_per_connection": 4, 00:05:54.100 "pdu_pool_size": 36864, 00:05:54.100 "immediate_data_pool_size": 16384, 00:05:54.100 "data_out_pool_size": 2048 00:05:54.100 } 00:05:54.100 } 00:05:54.100 ] 00:05:54.100 } 00:05:54.100 ] 00:05:54.100 } 00:05:54.100 05:52:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:54.100 05:52:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 74276 00:05:54.100 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 74276 ']' 00:05:54.100 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 74276 00:05:54.100 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:54.100 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.100 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74276 00:05:54.100 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.100 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.100 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74276' 00:05:54.100 killing process with pid 74276 00:05:54.100 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 74276 00:05:54.100 05:52:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 74276 00:05:54.359 05:52:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=74304 00:05:54.359 05:52:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:54.359 05:52:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:59.629 05:52:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 74304 00:05:59.629 05:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 74304 ']' 00:05:59.629 05:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 74304 00:05:59.629 05:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:59.629 05:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.629 05:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74304 00:05:59.629 05:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.629 05:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.629 killing process with pid 74304 00:05:59.629 05:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74304' 00:05:59.629 05:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 74304 00:05:59.629 05:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 74304 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 
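Editor's note: the long JSON blob above is `save_config` output captured to test/rpc/config.json; the harness then restarts the target with --json and greps its log for the transport-init banner to prove the snapshot replayed. The same round-trip, minimized; /tmp paths and the sleeps are placeholders for this sketch (a real run would use the poll loop shown earlier):

    # sketch: save a live config, replay it at startup, verify from the log
    cfg=/tmp/config.json log=/tmp/log.txt

    build/bin/spdk_tgt -m 0x1 &                # first target: configure it live
    pid=$!; sleep 1
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > "$cfg"        # snapshot every subsystem
    kill "$pid"; wait "$pid" 2>/dev/null

    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$cfg" > "$log" 2>&1 &
    pid=$!; sleep 1
    kill "$pid"; wait "$pid" 2>/dev/null
    grep -q 'TCP Transport Init' "$log"        # transport was re-created from JSON
    rm "$cfg" "$log"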
00:05:59.888 00:05:59.888 real 0m6.797s 00:05:59.888 user 0m6.579s 00:05:59.888 sys 0m0.550s 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.888 ************************************ 00:05:59.888 END TEST skip_rpc_with_json 00:05:59.888 ************************************ 00:05:59.888 05:52:51 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:59.888 05:52:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:59.888 05:52:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.888 05:52:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.888 05:52:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.888 ************************************ 00:05:59.888 START TEST skip_rpc_with_delay 00:05:59.888 ************************************ 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.888 [2024-07-13 05:52:51.534764] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:59.888 [2024-07-13 05:52:51.534937] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:59.888 00:05:59.888 real 0m0.165s 00:05:59.888 user 0m0.105s 00:05:59.888 sys 0m0.058s 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.888 05:52:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:59.888 ************************************ 00:05:59.888 END TEST skip_rpc_with_delay 00:05:59.888 ************************************ 00:06:00.148 05:52:51 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:00.148 05:52:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:00.148 05:52:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:00.148 05:52:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:00.148 05:52:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.148 05:52:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.148 05:52:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.148 ************************************ 00:06:00.148 START TEST exit_on_failed_rpc_init 00:06:00.148 ************************************ 00:06:00.148 05:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:00.148 05:52:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=74416 00:06:00.148 05:52:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 74416 00:06:00.148 05:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 74416 ']' 00:06:00.148 05:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.148 05:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.148 05:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.148 05:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.148 05:52:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:00.148 05:52:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.148 [2024-07-13 05:52:51.761425] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
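Editor's note: skip_rpc_with_delay, which wraps up above, boils down to a single negative assertion: --wait-for-rpc must be rejected alongside --no-rpc-server, since there is no RPC server to deliver the start-init call. The NOT/es bookkeeping in the trace is the harness's negation helper; a simplified equivalent (the real one in autotest_common.sh also validates its argument before running it):

    # sketch: simplified NOT helper plus the delay test's one assertion
    NOT() {
        local es=0
        "$@" || es=$?                          # run the command, keep its status
        (( es > 128 )) && es=1                 # a signal death still counts as failure
        (( es != 0 ))                          # succeed only if the command failed
    }

    # exits immediately with the app.c error seen in the trace above
    NOT build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc \
        && echo "OK: contradictory flags were rejected"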
00:06:00.148 [2024-07-13 05:52:51.761587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74416 ] 00:06:00.407 [2024-07-13 05:52:51.909063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.407 [2024-07-13 05:52:51.942614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.975 05:52:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.975 05:52:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:00.975 05:52:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.975 05:52:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.975 05:52:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:00.975 05:52:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.975 05:52:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.975 05:52:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.975 05:52:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.975 05:52:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.975 05:52:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.975 05:52:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.975 05:52:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.975 05:52:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:00.975 05:52:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.233 [2024-07-13 05:52:52.730490] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:01.234 [2024-07-13 05:52:52.730667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74428 ] 00:06:01.234 [2024-07-13 05:52:52.881053] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.234 [2024-07-13 05:52:52.922107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.234 [2024-07-13 05:52:52.922259] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
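Editor's note: exit_on_failed_rpc_init starts one target on /var/tmp/spdk.sock, then a second on the same socket; the second must log the "in use. Specify another." error above and exit non-zero rather than hang. The same collision by hand, assuming this minimal sketch stands in for the harness (-r selects the RPC socket path; the core masks are kept disjoint as in the trace):

    # sketch: the second target must fail fast when the RPC socket is taken
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk.sock &
    pid=$!; sleep 1                            # give the first target its socket
    if build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk.sock; then
        echo "FAIL: second target claimed an in-use RPC socket" >&2
        kill "$pid"; exit 1
    fi
    echo "OK: second target exited non-zero"
    kill "$pid"; wait "$pid" 2>/dev/null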
00:06:01.234 [2024-07-13 05:52:52.922290] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:01.234 [2024-07-13 05:52:52.922312] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.502 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:01.502 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.502 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:01.502 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:01.502 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:01.502 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.502 05:52:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:01.502 05:52:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 74416 00:06:01.502 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 74416 ']' 00:06:01.502 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 74416 00:06:01.502 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:01.502 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.502 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74416 00:06:01.502 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.502 killing process with pid 74416 00:06:01.502 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.502 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74416' 00:06:01.502 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 74416 00:06:01.502 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 74416 00:06:01.762 00:06:01.762 real 0m1.712s 00:06:01.762 user 0m1.986s 00:06:01.762 sys 0m0.401s 00:06:01.762 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.762 05:52:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.762 ************************************ 00:06:01.762 END TEST exit_on_failed_rpc_init 00:06:01.762 ************************************ 00:06:01.762 05:52:53 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:01.762 05:52:53 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:01.762 00:06:01.762 real 0m14.298s 00:06:01.762 user 0m13.737s 00:06:01.762 sys 0m1.440s 00:06:01.762 05:52:53 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.762 ************************************ 00:06:01.762 END TEST skip_rpc 00:06:01.762 ************************************ 00:06:01.762 05:52:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.762 05:52:53 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.762 05:52:53 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:01.762 05:52:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.762 
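Editor's note: the rpc_client suite invoked above runs rpc_client_test, a compiled C client that drives the JSON-RPC server directly, so there is little shell to show. For illustration only, the same wire protocol can be poked from the shell, assuming a target is listening on the default socket and a netcat build that supports -U and -q (neither is guaranteed on every runner):

    # sketch: the JSON-RPC round-trip the C client performs, done by hand
    scripts/rpc.py spdk_get_version            # the Python client, for comparison
    printf '%s' '{"jsonrpc":"2.0","method":"spdk_get_version","id":1}' \
        | nc -U -q 1 /var/tmp/spdk.sock        # raw request on the Unix socket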
05:52:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.762 05:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:01.762 ************************************ 00:06:01.762 START TEST rpc_client 00:06:01.762 ************************************ 00:06:01.762 05:52:53 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:02.021 * Looking for test storage... 00:06:02.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:02.021 05:52:53 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:02.021 OK 00:06:02.021 05:52:53 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:02.021 00:06:02.021 real 0m0.134s 00:06:02.021 user 0m0.070s 00:06:02.021 sys 0m0.070s 00:06:02.021 05:52:53 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.021 05:52:53 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:02.021 ************************************ 00:06:02.021 END TEST rpc_client 00:06:02.021 ************************************ 00:06:02.021 05:52:53 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.021 05:52:53 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:02.021 05:52:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.021 05:52:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.021 05:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:02.021 ************************************ 00:06:02.021 START TEST json_config 00:06:02.021 ************************************ 00:06:02.021 05:52:53 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:02.021 05:52:53 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:02.021 05:52:53 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a816761f-03ef-42fc-91d8-b7286b6eff78 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a816761f-03ef-42fc-91d8-b7286b6eff78 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.022 05:52:53 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:02.022 05:52:53 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.022 05:52:53 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.022 05:52:53 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.022 05:52:53 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.022 05:52:53 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.022 05:52:53 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.022 05:52:53 json_config -- paths/export.sh@5 -- # export PATH 00:06:02.022 05:52:53 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@47 -- # : 0 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:02.022 05:52:53 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:02.022 05:52:53 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:02.022 05:52:53 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:02.022 05:52:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:02.022 05:52:53 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:02.022 05:52:53 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:02.022 WARNING: No tests are enabled so not running JSON configuration tests 00:06:02.022 05:52:53 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:02.022 05:52:53 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:02.022 00:06:02.022 real 0m0.073s 00:06:02.022 user 0m0.033s 00:06:02.022 sys 0m0.040s 00:06:02.022 05:52:53 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.022 05:52:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.022 ************************************ 00:06:02.022 END TEST json_config 00:06:02.022 ************************************ 00:06:02.022 05:52:53 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.022 05:52:53 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:02.022 05:52:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.022 05:52:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.022 05:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:02.282 ************************************ 00:06:02.282 START TEST json_config_extra_key 00:06:02.282 ************************************ 00:06:02.282 05:52:53 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:02.282 05:52:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a816761f-03ef-42fc-91d8-b7286b6eff78 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a816761f-03ef-42fc-91d8-b7286b6eff78 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:02.282 05:52:53 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:06:02.282 05:52:53 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.282 05:52:53 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.282 05:52:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.282 05:52:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.282 05:52:53 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.282 05:52:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:02.282 05:52:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:02.282 05:52:53 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:02.282 05:52:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:02.282 05:52:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:02.282 05:52:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:02.282 05:52:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 
00:06:02.282 05:52:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:02.282 05:52:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:02.282 05:52:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:02.282 05:52:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:02.282 05:52:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:02.282 05:52:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:02.282 INFO: launching applications... 00:06:02.282 05:52:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:02.282 05:52:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:02.282 05:52:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:02.282 05:52:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:02.282 05:52:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:02.282 05:52:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:02.282 05:52:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:02.282 05:52:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.282 05:52:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.282 05:52:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=74587 00:06:02.282 Waiting for target to run... 00:06:02.282 05:52:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:02.282 05:52:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 74587 /var/tmp/spdk_tgt.sock 00:06:02.282 05:52:53 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 74587 ']' 00:06:02.282 05:52:53 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:02.282 05:52:53 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:02.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:02.282 05:52:53 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.282 05:52:53 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:02.282 05:52:53 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.282 05:52:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:02.282 [2024-07-13 05:52:53.912137] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
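The waitforlisten step traced above blocks until the freshly launched spdk_tgt answers on /var/tmp/spdk_tgt.sock. A minimal sketch of that wait-for-listen pattern, assuming scripts/rpc.py and the spdk_get_version RPC (both of which appear elsewhere in this log); the retry count and sleep interval here are illustrative, not the harness's exact values:

  #!/usr/bin/env bash
  # Poll until a just-launched SPDK app answers RPCs on its UNIX socket.
  pid=$1                               # PID of the spdk_tgt we launched
  sock=${2:-/var/tmp/spdk_tgt.sock}    # RPC listen address from the trace
  for ((i = 0; i < 100; i++)); do
      # If the process died before listening, fail fast instead of timing out.
      kill -0 "$pid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
      # One successful round-trip RPC proves the socket is up and serving.
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" spdk_get_version &>/dev/null; then
          exit 0
      fi
      sleep 0.1
  done
  echo "timed out waiting for $sock" >&2
  exit 1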
00:06:02.282 [2024-07-13 05:52:53.912325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74587 ] 00:06:02.540 [2024-07-13 05:52:54.203107] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.540 [2024-07-13 05:52:54.231791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.475 00:06:03.475 INFO: shutting down applications... 00:06:03.475 05:52:54 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.475 05:52:54 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:03.475 05:52:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:03.475 05:52:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:03.475 05:52:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:03.475 05:52:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:03.475 05:52:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:03.475 05:52:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 74587 ]] 00:06:03.475 05:52:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 74587 00:06:03.475 05:52:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:03.475 05:52:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.475 05:52:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74587 00:06:03.475 05:52:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:03.734 05:52:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:03.734 05:52:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.734 05:52:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 74587 00:06:03.734 05:52:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:03.734 05:52:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:03.734 SPDK target shutdown done 00:06:03.734 Success 00:06:03.734 05:52:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:03.734 05:52:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:03.734 05:52:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:03.734 00:06:03.734 real 0m1.609s 00:06:03.734 user 0m1.471s 00:06:03.734 sys 0m0.354s 00:06:03.734 ************************************ 00:06:03.734 END TEST json_config_extra_key 00:06:03.734 ************************************ 00:06:03.734 05:52:55 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.734 05:52:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:03.734 05:52:55 -- common/autotest_common.sh@1142 -- # return 0 00:06:03.734 05:52:55 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:03.734 05:52:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.734 05:52:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.734 05:52:55 -- common/autotest_common.sh@10 -- # set +x 00:06:03.735 
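The shutdown traced above follows a SIGINT-then-poll pattern: kill -SIGINT, then up to 30 probes with kill -0 at 0.5 s intervals before declaring "SPDK target shutdown done". Read as a standalone sketch (the function name and the SIGKILL fallback are ours, not the harness's):

  #!/usr/bin/env bash
  # Graceful SPDK shutdown: SIGINT, then poll the PID until it disappears.
  shutdown_app() {
      local pid=$1
      kill -SIGINT "$pid"        # SIGINT takes SPDK through its clean exit path
      for ((i = 0; i < 30; i++)); do
          # kill -0 delivers no signal; it only tests whether the PID exists.
          if ! kill -0 "$pid" 2>/dev/null; then
              echo 'SPDK target shutdown done'
              return 0
          fi
          sleep 0.5
      done
      echo "pid $pid still alive after 15s; escalating" >&2
      kill -SIGKILL "$pid"
  }
  shutdown_app "$1"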
************************************ 00:06:03.735 START TEST alias_rpc 00:06:03.735 ************************************ 00:06:03.735 05:52:55 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:03.993 * Looking for test storage... 00:06:03.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:03.993 05:52:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:03.993 05:52:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=74652 00:06:03.993 05:52:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.993 05:52:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 74652 00:06:03.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.993 05:52:55 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 74652 ']' 00:06:03.994 05:52:55 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.994 05:52:55 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.994 05:52:55 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.994 05:52:55 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.994 05:52:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.994 [2024-07-13 05:52:55.594958] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:03.994 [2024-07-13 05:52:55.595212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74652 ] 00:06:04.252 [2024-07-13 05:52:55.740729] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.252 [2024-07-13 05:52:55.776949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.820 05:52:56 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.820 05:52:56 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:04.820 05:52:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:05.079 05:52:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 74652 00:06:05.079 05:52:56 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 74652 ']' 00:06:05.079 05:52:56 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 74652 00:06:05.079 05:52:56 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:05.079 05:52:56 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.079 05:52:56 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74652 00:06:05.079 killing process with pid 74652 00:06:05.079 05:52:56 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.079 05:52:56 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.079 05:52:56 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74652' 00:06:05.079 05:52:56 alias_rpc -- common/autotest_common.sh@967 -- # kill 74652 00:06:05.079 05:52:56 alias_rpc -- common/autotest_common.sh@972 -- # wait 74652 00:06:05.647 ************************************ 00:06:05.647 END TEST alias_rpc 00:06:05.647 
************************************ 00:06:05.647 00:06:05.647 real 0m1.670s 00:06:05.647 user 0m1.939s 00:06:05.647 sys 0m0.383s 00:06:05.647 05:52:57 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.647 05:52:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.647 05:52:57 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.647 05:52:57 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:05.647 05:52:57 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:05.647 05:52:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.647 05:52:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.647 05:52:57 -- common/autotest_common.sh@10 -- # set +x 00:06:05.647 ************************************ 00:06:05.647 START TEST spdkcli_tcp 00:06:05.647 ************************************ 00:06:05.647 05:52:57 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:05.647 * Looking for test storage... 00:06:05.647 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:05.647 05:52:57 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:05.647 05:52:57 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:05.647 05:52:57 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:05.647 05:52:57 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:05.647 05:52:57 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:05.647 05:52:57 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:05.647 05:52:57 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:05.647 05:52:57 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.647 05:52:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.647 05:52:57 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=74724 00:06:05.647 05:52:57 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 74724 00:06:05.647 05:52:57 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:05.647 05:52:57 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 74724 ']' 00:06:05.647 05:52:57 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.647 05:52:57 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.647 05:52:57 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.647 05:52:57 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.647 05:52:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.647 [2024-07-13 05:52:57.291616] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:05.647 [2024-07-13 05:52:57.291775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74724 ] 00:06:05.905 [2024-07-13 05:52:57.431042] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.905 [2024-07-13 05:52:57.469946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.905 [2024-07-13 05:52:57.469983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.164 05:52:57 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.164 05:52:57 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:06.164 05:52:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=74733 00:06:06.164 05:52:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:06.164 05:52:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:06.423 [ 00:06:06.423 "bdev_malloc_delete", 00:06:06.423 "bdev_malloc_create", 00:06:06.423 "bdev_null_resize", 00:06:06.423 "bdev_null_delete", 00:06:06.423 "bdev_null_create", 00:06:06.423 "bdev_nvme_cuse_unregister", 00:06:06.423 "bdev_nvme_cuse_register", 00:06:06.423 "bdev_opal_new_user", 00:06:06.423 "bdev_opal_set_lock_state", 00:06:06.423 "bdev_opal_delete", 00:06:06.423 "bdev_opal_get_info", 00:06:06.423 "bdev_opal_create", 00:06:06.423 "bdev_nvme_opal_revert", 00:06:06.423 "bdev_nvme_opal_init", 00:06:06.423 "bdev_nvme_send_cmd", 00:06:06.423 "bdev_nvme_get_path_iostat", 00:06:06.423 "bdev_nvme_get_mdns_discovery_info", 00:06:06.423 "bdev_nvme_stop_mdns_discovery", 00:06:06.423 "bdev_nvme_start_mdns_discovery", 00:06:06.423 "bdev_nvme_set_multipath_policy", 00:06:06.423 "bdev_nvme_set_preferred_path", 00:06:06.423 "bdev_nvme_get_io_paths", 00:06:06.423 "bdev_nvme_remove_error_injection", 00:06:06.423 "bdev_nvme_add_error_injection", 00:06:06.423 "bdev_nvme_get_discovery_info", 00:06:06.423 "bdev_nvme_stop_discovery", 00:06:06.423 "bdev_nvme_start_discovery", 00:06:06.423 "bdev_nvme_get_controller_health_info", 00:06:06.423 "bdev_nvme_disable_controller", 00:06:06.423 "bdev_nvme_enable_controller", 00:06:06.423 "bdev_nvme_reset_controller", 00:06:06.423 "bdev_nvme_get_transport_statistics", 00:06:06.423 "bdev_nvme_apply_firmware", 00:06:06.423 "bdev_nvme_detach_controller", 00:06:06.423 "bdev_nvme_get_controllers", 00:06:06.423 "bdev_nvme_attach_controller", 00:06:06.423 "bdev_nvme_set_hotplug", 00:06:06.423 "bdev_nvme_set_options", 00:06:06.423 "bdev_passthru_delete", 00:06:06.423 "bdev_passthru_create", 00:06:06.423 "bdev_lvol_set_parent_bdev", 00:06:06.423 "bdev_lvol_set_parent", 00:06:06.423 "bdev_lvol_check_shallow_copy", 00:06:06.423 "bdev_lvol_start_shallow_copy", 00:06:06.423 "bdev_lvol_grow_lvstore", 00:06:06.423 "bdev_lvol_get_lvols", 00:06:06.423 "bdev_lvol_get_lvstores", 00:06:06.423 "bdev_lvol_delete", 00:06:06.423 "bdev_lvol_set_read_only", 00:06:06.423 "bdev_lvol_resize", 00:06:06.423 "bdev_lvol_decouple_parent", 00:06:06.423 "bdev_lvol_inflate", 00:06:06.423 "bdev_lvol_rename", 00:06:06.423 "bdev_lvol_clone_bdev", 00:06:06.423 "bdev_lvol_clone", 00:06:06.423 "bdev_lvol_snapshot", 00:06:06.423 "bdev_lvol_create", 00:06:06.423 "bdev_lvol_delete_lvstore", 00:06:06.423 "bdev_lvol_rename_lvstore", 00:06:06.423 "bdev_lvol_create_lvstore", 
00:06:06.423 "bdev_raid_set_options", 00:06:06.423 "bdev_raid_remove_base_bdev", 00:06:06.423 "bdev_raid_add_base_bdev", 00:06:06.423 "bdev_raid_delete", 00:06:06.423 "bdev_raid_create", 00:06:06.423 "bdev_raid_get_bdevs", 00:06:06.423 "bdev_error_inject_error", 00:06:06.423 "bdev_error_delete", 00:06:06.423 "bdev_error_create", 00:06:06.423 "bdev_split_delete", 00:06:06.423 "bdev_split_create", 00:06:06.423 "bdev_delay_delete", 00:06:06.423 "bdev_delay_create", 00:06:06.423 "bdev_delay_update_latency", 00:06:06.423 "bdev_zone_block_delete", 00:06:06.423 "bdev_zone_block_create", 00:06:06.423 "blobfs_create", 00:06:06.423 "blobfs_detect", 00:06:06.423 "blobfs_set_cache_size", 00:06:06.423 "bdev_xnvme_delete", 00:06:06.423 "bdev_xnvme_create", 00:06:06.423 "bdev_aio_delete", 00:06:06.423 "bdev_aio_rescan", 00:06:06.423 "bdev_aio_create", 00:06:06.423 "bdev_ftl_set_property", 00:06:06.423 "bdev_ftl_get_properties", 00:06:06.423 "bdev_ftl_get_stats", 00:06:06.423 "bdev_ftl_unmap", 00:06:06.423 "bdev_ftl_unload", 00:06:06.423 "bdev_ftl_delete", 00:06:06.423 "bdev_ftl_load", 00:06:06.423 "bdev_ftl_create", 00:06:06.423 "bdev_virtio_attach_controller", 00:06:06.423 "bdev_virtio_scsi_get_devices", 00:06:06.423 "bdev_virtio_detach_controller", 00:06:06.423 "bdev_virtio_blk_set_hotplug", 00:06:06.423 "bdev_iscsi_delete", 00:06:06.423 "bdev_iscsi_create", 00:06:06.423 "bdev_iscsi_set_options", 00:06:06.423 "accel_error_inject_error", 00:06:06.423 "ioat_scan_accel_module", 00:06:06.423 "dsa_scan_accel_module", 00:06:06.423 "iaa_scan_accel_module", 00:06:06.423 "keyring_file_remove_key", 00:06:06.423 "keyring_file_add_key", 00:06:06.423 "keyring_linux_set_options", 00:06:06.423 "iscsi_get_histogram", 00:06:06.423 "iscsi_enable_histogram", 00:06:06.423 "iscsi_set_options", 00:06:06.423 "iscsi_get_auth_groups", 00:06:06.423 "iscsi_auth_group_remove_secret", 00:06:06.423 "iscsi_auth_group_add_secret", 00:06:06.423 "iscsi_delete_auth_group", 00:06:06.423 "iscsi_create_auth_group", 00:06:06.423 "iscsi_set_discovery_auth", 00:06:06.423 "iscsi_get_options", 00:06:06.423 "iscsi_target_node_request_logout", 00:06:06.423 "iscsi_target_node_set_redirect", 00:06:06.423 "iscsi_target_node_set_auth", 00:06:06.423 "iscsi_target_node_add_lun", 00:06:06.423 "iscsi_get_stats", 00:06:06.423 "iscsi_get_connections", 00:06:06.423 "iscsi_portal_group_set_auth", 00:06:06.423 "iscsi_start_portal_group", 00:06:06.423 "iscsi_delete_portal_group", 00:06:06.423 "iscsi_create_portal_group", 00:06:06.423 "iscsi_get_portal_groups", 00:06:06.423 "iscsi_delete_target_node", 00:06:06.423 "iscsi_target_node_remove_pg_ig_maps", 00:06:06.423 "iscsi_target_node_add_pg_ig_maps", 00:06:06.423 "iscsi_create_target_node", 00:06:06.423 "iscsi_get_target_nodes", 00:06:06.423 "iscsi_delete_initiator_group", 00:06:06.423 "iscsi_initiator_group_remove_initiators", 00:06:06.423 "iscsi_initiator_group_add_initiators", 00:06:06.423 "iscsi_create_initiator_group", 00:06:06.423 "iscsi_get_initiator_groups", 00:06:06.423 "nvmf_set_crdt", 00:06:06.423 "nvmf_set_config", 00:06:06.423 "nvmf_set_max_subsystems", 00:06:06.423 "nvmf_stop_mdns_prr", 00:06:06.423 "nvmf_publish_mdns_prr", 00:06:06.423 "nvmf_subsystem_get_listeners", 00:06:06.423 "nvmf_subsystem_get_qpairs", 00:06:06.423 "nvmf_subsystem_get_controllers", 00:06:06.423 "nvmf_get_stats", 00:06:06.423 "nvmf_get_transports", 00:06:06.423 "nvmf_create_transport", 00:06:06.423 "nvmf_get_targets", 00:06:06.423 "nvmf_delete_target", 00:06:06.423 "nvmf_create_target", 00:06:06.423 
"nvmf_subsystem_allow_any_host", 00:06:06.423 "nvmf_subsystem_remove_host", 00:06:06.423 "nvmf_subsystem_add_host", 00:06:06.423 "nvmf_ns_remove_host", 00:06:06.423 "nvmf_ns_add_host", 00:06:06.423 "nvmf_subsystem_remove_ns", 00:06:06.423 "nvmf_subsystem_add_ns", 00:06:06.423 "nvmf_subsystem_listener_set_ana_state", 00:06:06.423 "nvmf_discovery_get_referrals", 00:06:06.424 "nvmf_discovery_remove_referral", 00:06:06.424 "nvmf_discovery_add_referral", 00:06:06.424 "nvmf_subsystem_remove_listener", 00:06:06.424 "nvmf_subsystem_add_listener", 00:06:06.424 "nvmf_delete_subsystem", 00:06:06.424 "nvmf_create_subsystem", 00:06:06.424 "nvmf_get_subsystems", 00:06:06.424 "env_dpdk_get_mem_stats", 00:06:06.424 "nbd_get_disks", 00:06:06.424 "nbd_stop_disk", 00:06:06.424 "nbd_start_disk", 00:06:06.424 "ublk_recover_disk", 00:06:06.424 "ublk_get_disks", 00:06:06.424 "ublk_stop_disk", 00:06:06.424 "ublk_start_disk", 00:06:06.424 "ublk_destroy_target", 00:06:06.424 "ublk_create_target", 00:06:06.424 "virtio_blk_create_transport", 00:06:06.424 "virtio_blk_get_transports", 00:06:06.424 "vhost_controller_set_coalescing", 00:06:06.424 "vhost_get_controllers", 00:06:06.424 "vhost_delete_controller", 00:06:06.424 "vhost_create_blk_controller", 00:06:06.424 "vhost_scsi_controller_remove_target", 00:06:06.424 "vhost_scsi_controller_add_target", 00:06:06.424 "vhost_start_scsi_controller", 00:06:06.424 "vhost_create_scsi_controller", 00:06:06.424 "thread_set_cpumask", 00:06:06.424 "framework_get_governor", 00:06:06.424 "framework_get_scheduler", 00:06:06.424 "framework_set_scheduler", 00:06:06.424 "framework_get_reactors", 00:06:06.424 "thread_get_io_channels", 00:06:06.424 "thread_get_pollers", 00:06:06.424 "thread_get_stats", 00:06:06.424 "framework_monitor_context_switch", 00:06:06.424 "spdk_kill_instance", 00:06:06.424 "log_enable_timestamps", 00:06:06.424 "log_get_flags", 00:06:06.424 "log_clear_flag", 00:06:06.424 "log_set_flag", 00:06:06.424 "log_get_level", 00:06:06.424 "log_set_level", 00:06:06.424 "log_get_print_level", 00:06:06.424 "log_set_print_level", 00:06:06.424 "framework_enable_cpumask_locks", 00:06:06.424 "framework_disable_cpumask_locks", 00:06:06.424 "framework_wait_init", 00:06:06.424 "framework_start_init", 00:06:06.424 "scsi_get_devices", 00:06:06.424 "bdev_get_histogram", 00:06:06.424 "bdev_enable_histogram", 00:06:06.424 "bdev_set_qos_limit", 00:06:06.424 "bdev_set_qd_sampling_period", 00:06:06.424 "bdev_get_bdevs", 00:06:06.424 "bdev_reset_iostat", 00:06:06.424 "bdev_get_iostat", 00:06:06.424 "bdev_examine", 00:06:06.424 "bdev_wait_for_examine", 00:06:06.424 "bdev_set_options", 00:06:06.424 "notify_get_notifications", 00:06:06.424 "notify_get_types", 00:06:06.424 "accel_get_stats", 00:06:06.424 "accel_set_options", 00:06:06.424 "accel_set_driver", 00:06:06.424 "accel_crypto_key_destroy", 00:06:06.424 "accel_crypto_keys_get", 00:06:06.424 "accel_crypto_key_create", 00:06:06.424 "accel_assign_opc", 00:06:06.424 "accel_get_module_info", 00:06:06.424 "accel_get_opc_assignments", 00:06:06.424 "vmd_rescan", 00:06:06.424 "vmd_remove_device", 00:06:06.424 "vmd_enable", 00:06:06.424 "sock_get_default_impl", 00:06:06.424 "sock_set_default_impl", 00:06:06.424 "sock_impl_set_options", 00:06:06.424 "sock_impl_get_options", 00:06:06.424 "iobuf_get_stats", 00:06:06.424 "iobuf_set_options", 00:06:06.424 "framework_get_pci_devices", 00:06:06.424 "framework_get_config", 00:06:06.424 "framework_get_subsystems", 00:06:06.424 "trace_get_info", 00:06:06.424 "trace_get_tpoint_group_mask", 00:06:06.424 
"trace_disable_tpoint_group", 00:06:06.424 "trace_enable_tpoint_group", 00:06:06.424 "trace_clear_tpoint_mask", 00:06:06.424 "trace_set_tpoint_mask", 00:06:06.424 "keyring_get_keys", 00:06:06.424 "spdk_get_version", 00:06:06.424 "rpc_get_methods" 00:06:06.424 ] 00:06:06.424 05:52:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:06.424 05:52:57 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:06.424 05:52:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.424 05:52:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:06.424 05:52:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 74724 00:06:06.424 05:52:57 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 74724 ']' 00:06:06.424 05:52:57 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 74724 00:06:06.424 05:52:57 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:06.424 05:52:57 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.424 05:52:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74724 00:06:06.424 killing process with pid 74724 00:06:06.424 05:52:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:06.424 05:52:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:06.424 05:52:57 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74724' 00:06:06.424 05:52:57 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 74724 00:06:06.424 05:52:57 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 74724 00:06:06.683 ************************************ 00:06:06.683 END TEST spdkcli_tcp 00:06:06.683 ************************************ 00:06:06.683 00:06:06.683 real 0m1.158s 00:06:06.683 user 0m2.017s 00:06:06.683 sys 0m0.378s 00:06:06.683 05:52:58 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.683 05:52:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.683 05:52:58 -- common/autotest_common.sh@1142 -- # return 0 00:06:06.683 05:52:58 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:06.683 05:52:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.683 05:52:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.683 05:52:58 -- common/autotest_common.sh@10 -- # set +x 00:06:06.683 ************************************ 00:06:06.683 START TEST dpdk_mem_utility 00:06:06.683 ************************************ 00:06:06.683 05:52:58 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:06.942 * Looking for test storage... 00:06:06.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:06.942 05:52:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:06.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:06.942 05:52:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=74803 00:06:06.942 05:52:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 74803 00:06:06.942 05:52:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:06.942 05:52:58 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 74803 ']' 00:06:06.942 05:52:58 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.942 05:52:58 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.942 05:52:58 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.942 05:52:58 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.942 05:52:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:06.942 [2024-07-13 05:52:58.530056] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:06.942 [2024-07-13 05:52:58.530264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74803 ] 00:06:07.200 [2024-07-13 05:52:58.677766] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.200 [2024-07-13 05:52:58.719840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.768 05:52:59 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.768 05:52:59 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:07.768 05:52:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:07.768 05:52:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:07.768 05:52:59 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.768 05:52:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:07.768 { 00:06:07.768 "filename": "/tmp/spdk_mem_dump.txt" 00:06:07.768 } 00:06:07.768 05:52:59 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.768 05:52:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:07.768 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:07.768 1 heaps totaling size 814.000000 MiB 00:06:07.768 size: 814.000000 MiB heap id: 0 00:06:07.768 end heaps---------- 00:06:07.768 8 mempools totaling size 598.116089 MiB 00:06:07.768 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:07.768 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:07.768 size: 84.521057 MiB name: bdev_io_74803 00:06:07.768 size: 51.011292 MiB name: evtpool_74803 00:06:07.768 size: 50.003479 MiB name: msgpool_74803 00:06:07.768 size: 21.763794 MiB name: PDU_Pool 00:06:07.768 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:07.768 size: 0.026123 MiB name: Session_Pool 00:06:07.768 end mempools------- 00:06:07.768 6 memzones totaling size 4.142822 MiB 00:06:07.768 size: 1.000366 MiB name: RG_ring_0_74803 00:06:07.768 size: 1.000366 MiB name: RG_ring_1_74803 00:06:07.768 size: 1.000366 MiB name: RG_ring_4_74803 00:06:07.768 size: 1.000366 MiB name: RG_ring_5_74803 
00:06:07.768 size: 0.125366 MiB name: RG_ring_2_74803 00:06:07.768 size: 0.015991 MiB name: RG_ring_3_74803 00:06:07.768 end memzones------- 00:06:07.768 05:52:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:08.030 heap id: 0 total size: 814.000000 MiB number of busy elements: 302 number of free elements: 15 00:06:08.030 list of free elements. size: 12.471558 MiB 00:06:08.030 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:08.030 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:08.030 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:08.030 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:08.030 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:08.030 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:08.030 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:08.030 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:08.030 element at address: 0x200000200000 with size: 0.833191 MiB 00:06:08.030 element at address: 0x20001aa00000 with size: 0.568420 MiB 00:06:08.030 element at address: 0x20000b200000 with size: 0.488892 MiB 00:06:08.030 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:08.030 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:08.030 element at address: 0x200027e00000 with size: 0.396301 MiB 00:06:08.030 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:08.030 list of standard malloc elements. size: 199.265869 MiB 00:06:08.030 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:08.030 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:08.030 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:08.030 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:08.030 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:08.030 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:08.030 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:08.030 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:08.030 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:08.030 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d6180 with size: 0.000183 MiB 
00:06:08.030 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:08.030 element at 
address: 0x200003a59240 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:08.030 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000b27d340 
with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:08.030 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:06:08.030 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:08.031 element at address: 0x20001aa93100 with size: 0.000183 MiB 
00:06:08.031 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:08.031 [repetitive per-element records elided: uniform 0.000183 MiB elements spanning 0x20001aa93280-0x20001aa95440 and 0x200027e65740-0x200027e6ff00, one record each] 00:06:08.031 list of memzone associated elements.
size: 602.262573 MiB 00:06:08.031 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:08.031 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:08.031 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:08.032 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:08.032 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:08.032 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_74803_0 00:06:08.032 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:08.032 associated memzone info: size: 48.002930 MiB name: MP_evtpool_74803_0 00:06:08.032 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:08.032 associated memzone info: size: 48.002930 MiB name: MP_msgpool_74803_0 00:06:08.032 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:08.032 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:08.032 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:08.032 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:08.032 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:08.032 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_74803 00:06:08.032 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:08.032 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_74803 00:06:08.032 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:08.032 associated memzone info: size: 1.007996 MiB name: MP_evtpool_74803 00:06:08.032 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:08.032 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:08.032 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:08.032 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:08.032 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:08.032 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:08.032 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:08.032 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:08.032 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:08.032 associated memzone info: size: 1.000366 MiB name: RG_ring_0_74803 00:06:08.032 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:08.032 associated memzone info: size: 1.000366 MiB name: RG_ring_1_74803 00:06:08.032 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:08.032 associated memzone info: size: 1.000366 MiB name: RG_ring_4_74803 00:06:08.032 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:08.032 associated memzone info: size: 1.000366 MiB name: RG_ring_5_74803 00:06:08.032 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:08.032 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_74803 00:06:08.032 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:08.032 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:08.032 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:08.032 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:08.032 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:08.032 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:08.032 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:08.032 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_74803 00:06:08.032 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:08.032 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:08.032 element at address: 0x200027e658c0 with size: 0.023743 MiB 00:06:08.032 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:08.032 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:08.032 associated memzone info: size: 0.015991 MiB name: RG_ring_3_74803 00:06:08.032 element at address: 0x200027e6ba00 with size: 0.002441 MiB 00:06:08.032 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:08.032 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:08.032 associated memzone info: size: 0.000183 MiB name: MP_msgpool_74803 00:06:08.032 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:08.032 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_74803 00:06:08.032 element at address: 0x200027e6c4c0 with size: 0.000305 MiB 00:06:08.032 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:08.032 05:52:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:08.032 05:52:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 74803 00:06:08.032 05:52:59 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 74803 ']' 00:06:08.032 05:52:59 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 74803 00:06:08.032 05:52:59 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:08.032 05:52:59 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.032 05:52:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74803 00:06:08.032 05:52:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:08.032 05:52:59 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:08.032 05:52:59 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74803' 00:06:08.032 killing process with pid 74803 00:06:08.032 05:52:59 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 74803 00:06:08.032 05:52:59 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 74803 00:06:08.291 00:06:08.291 real 0m1.516s 00:06:08.291 user 0m1.637s 00:06:08.291 sys 0m0.392s 00:06:08.291 ************************************ 00:06:08.291 END TEST dpdk_mem_utility 00:06:08.291 ************************************ 00:06:08.291 05:52:59 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.291 05:52:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:08.291 05:52:59 -- common/autotest_common.sh@1142 -- # return 0 00:06:08.291 05:52:59 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:08.291 05:52:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.291 05:52:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.291 05:52:59 -- common/autotest_common.sh@10 -- # set +x 00:06:08.291 ************************************ 00:06:08.291 START TEST event 00:06:08.291 ************************************ 00:06:08.291 05:52:59 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:08.291 * Looking for test storage... 
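A note on the teardown above: the dpdk_mem_utility test exits through the harness's killprocess pattern — probe the PID with kill -0, resolve the command name with ps (the reactor_0 check), then kill and reap with wait. A minimal sketch of that pattern; the real common/autotest_common.sh helper also special-cases sudo-owned processes:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                    # liveness probe, sends no signal
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in the log above
            echo "killing process with pid $pid ($name)"
        fi
        kill "$pid"
        wait "$pid"                                   # reap so the exit status is collected
    }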
00:06:08.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:08.291 05:52:59 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:08.291 05:52:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:08.291 05:52:59 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:08.291 05:52:59 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:08.291 05:52:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.291 05:52:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.291 ************************************ 00:06:08.291 START TEST event_perf 00:06:08.291 ************************************ 00:06:08.291 05:53:00 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:08.564 Running I/O for 1 seconds...[2024-07-13 05:53:00.039665] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:08.564 [2024-07-13 05:53:00.040027] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74881 ] 00:06:08.564 [2024-07-13 05:53:00.185648] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.564 [2024-07-13 05:53:00.220875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.564 [2024-07-13 05:53:00.221007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.564 Running I/O for 1 seconds...[2024-07-13 05:53:00.221038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.564 [2024-07-13 05:53:00.221112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.942 00:06:09.942 lcore 0: 201629 00:06:09.942 lcore 1: 201629 00:06:09.942 lcore 2: 201630 00:06:09.942 lcore 3: 201628 00:06:09.942 done. 00:06:09.942 00:06:09.942 real 0m1.294s 00:06:09.942 user 0m4.086s 00:06:09.942 sys 0m0.089s 00:06:09.942 05:53:01 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.942 05:53:01 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.942 ************************************ 00:06:09.942 END TEST event_perf 00:06:09.942 ************************************ 00:06:09.942 05:53:01 event -- common/autotest_common.sh@1142 -- # return 0 00:06:09.942 05:53:01 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:09.942 05:53:01 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:09.942 05:53:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.942 05:53:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.942 ************************************ 00:06:09.942 START TEST event_reactor 00:06:09.942 ************************************ 00:06:09.942 05:53:01 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:09.942 [2024-07-13 05:53:01.380990] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
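The event_perf run above executes for one second on core mask 0xF and prints one event counter per lcore; the per-core values (~201,6xx) are nearly equal, showing the load spread evenly across the four reactors. A quick way to reproduce the run and total the counters — the log file name and output format assumptions are illustrative:

    test/event/event_perf/event_perf -m 0xF -t 1 > event_perf.log    # 4 cores, 1 second
    awk '/^lcore [0-9]+:/ { total += $3 } END { print total, "events in 1s" }' event_perf.log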
00:06:09.942 [2024-07-13 05:53:01.381204] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74915 ] 00:06:09.942 [2024-07-13 05:53:01.528536] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.942 [2024-07-13 05:53:01.560835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.317 test_start 00:06:11.317 oneshot 00:06:11.317 tick 100 00:06:11.317 tick 100 00:06:11.317 tick 250 00:06:11.317 tick 100 00:06:11.317 tick 100 00:06:11.317 tick 100 00:06:11.317 tick 250 00:06:11.317 tick 500 00:06:11.317 tick 100 00:06:11.317 tick 100 00:06:11.317 tick 250 00:06:11.317 tick 100 00:06:11.317 tick 100 00:06:11.317 test_end 00:06:11.317 00:06:11.317 real 0m1.291s 00:06:11.317 user 0m1.107s 00:06:11.317 sys 0m0.077s 00:06:11.317 05:53:02 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.317 ************************************ 00:06:11.317 END TEST event_reactor 00:06:11.317 ************************************ 00:06:11.317 05:53:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:11.317 05:53:02 event -- common/autotest_common.sh@1142 -- # return 0 00:06:11.317 05:53:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:11.317 05:53:02 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:11.317 05:53:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.317 05:53:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.317 ************************************ 00:06:11.317 START TEST event_reactor_perf 00:06:11.317 ************************************ 00:06:11.317 05:53:02 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:11.317 [2024-07-13 05:53:02.724309] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
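Every START TEST/END TEST banner and real/user/sys triple in this log comes from the harness's run_test wrapper. A stripped-down sketch of what it does; the actual common/autotest_common.sh version also manages xtrace nesting and argument validation (the '[' N -le 1 ']' checks visible above):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"              # bash prints the real/user/sys summary
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }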
00:06:11.317 [2024-07-13 05:53:02.724501] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74946 ] 00:06:11.317 [2024-07-13 05:53:02.869480] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.317 [2024-07-13 05:53:02.907726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.251 test_start 00:06:12.251 test_end 00:06:12.251 Performance: 328325 events per second 00:06:12.509 ************************************ 00:06:12.509 END TEST event_reactor_perf 00:06:12.509 ************************************ 00:06:12.509 00:06:12.509 real 0m1.285s 00:06:12.509 user 0m1.105s 00:06:12.509 sys 0m0.072s 00:06:12.509 05:53:03 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.509 05:53:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.509 05:53:04 event -- common/autotest_common.sh@1142 -- # return 0 00:06:12.509 05:53:04 event -- event/event.sh@49 -- # uname -s 00:06:12.509 05:53:04 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:12.509 05:53:04 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:12.509 05:53:04 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.509 05:53:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.509 05:53:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.509 ************************************ 00:06:12.509 START TEST event_scheduler 00:06:12.509 ************************************ 00:06:12.509 05:53:04 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:12.509 * Looking for test storage... 00:06:12.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:12.509 05:53:04 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:12.509 05:53:04 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=75014 00:06:12.509 05:53:04 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.509 05:53:04 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:12.509 05:53:04 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 75014 00:06:12.509 05:53:04 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 75014 ']' 00:06:12.509 05:53:04 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.509 05:53:04 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.509 05:53:04 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.509 05:53:04 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.509 05:53:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.509 [2024-07-13 05:53:04.226427] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
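event_scheduler launches its app with --wait-for-rpc, so the waitforlisten call above must block until the UNIX-domain RPC socket answers before any framework_* RPC is sent. A simplified form of that wait loop; the retry count and sleep interval are illustrative:

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1      # app died during startup
            [ -S "$sock" ] && scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }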
00:06:12.509 [2024-07-13 05:53:04.226610] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75014 ] 00:06:12.766 [2024-07-13 05:53:04.381200] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.766 [2024-07-13 05:53:04.426036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.766 [2024-07-13 05:53:04.426181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.766 [2024-07-13 05:53:04.426334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.766 [2024-07-13 05:53:04.426566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.700 05:53:05 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.700 05:53:05 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:13.700 05:53:05 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:13.700 05:53:05 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.700 05:53:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:13.700 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:13.700 POWER: Cannot set governor of lcore 0 to userspace 00:06:13.700 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:13.700 POWER: Cannot set governor of lcore 0 to performance 00:06:13.700 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:13.700 POWER: Cannot set governor of lcore 0 to userspace 00:06:13.700 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:13.700 POWER: Unable to set Power Management Environment for lcore 0 00:06:13.700 [2024-07-13 05:53:05.100589] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:13.700 [2024-07-13 05:53:05.100625] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:13.700 [2024-07-13 05:53:05.100650] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:13.700 [2024-07-13 05:53:05.100674] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:13.700 [2024-07-13 05:53:05.100688] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:13.700 [2024-07-13 05:53:05.100700] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:13.700 05:53:05 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.700 05:53:05 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:13.700 05:53:05 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.700 05:53:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:13.700 [2024-07-13 05:53:05.149376] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
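The POWER and GUEST_CHANNEL errors above are benign in this VM: there is no writable cpufreq governor and no virtio power agent, so the dpdk governor fails to initialize and the dynamic scheduler falls back to its built-in thresholds (load limit 20, core limit 80, core busy 95). The two RPCs the test issues at this point, written as direct rpc.py calls rather than through the rpc_cmd wrapper:

    scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic   # choose scheduler while paused in --wait-for-rpc
    scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init              # resume framework initialization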
00:06:13.700 05:53:05 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.700 05:53:05 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:13.700 05:53:05 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.700 05:53:05 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.700 05:53:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:13.700 ************************************ 00:06:13.700 START TEST scheduler_create_thread 00:06:13.700 ************************************ 00:06:13.700 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:13.700 05:53:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:13.700 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.700 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.700 2 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.701 3 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.701 4 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.701 5 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.701 6 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.701 7 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.701 8 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.701 9 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.701 10 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.701 05:53:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.081 05:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.081 05:53:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:15.081 05:53:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:15.081 05:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.081 05:53:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.458 ************************************ 00:06:16.458 END TEST scheduler_create_thread 00:06:16.458 ************************************ 00:06:16.458 05:53:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.458 00:06:16.458 real 0m2.616s 00:06:16.458 user 0m0.015s 00:06:16.458 sys 0m0.005s 00:06:16.458 05:53:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.458 05:53:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.458 05:53:07 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:16.458 05:53:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:16.458 05:53:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 75014 00:06:16.458 05:53:07 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 75014 ']' 00:06:16.458 05:53:07 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 75014 00:06:16.458 05:53:07 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:16.458 05:53:07 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.458 05:53:07 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75014 00:06:16.458 killing process with pid 75014 00:06:16.458 05:53:07 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:16.458 05:53:07 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:16.458 05:53:07 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75014' 00:06:16.458 05:53:07 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 75014 00:06:16.458 05:53:07 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 75014 00:06:16.717 [2024-07-13 05:53:08.257941] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
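The scheduler_create_thread sequence above exercises the full thread lifecycle through the test app's plugin RPCs: pinned threads at 100% and 0% activity on each single-core mask, unpinned threads at fixed activity, one live activity change, and one delete. Condensed into plain calls (the create RPC prints the new thread ID, which the log shows as 11 and 12):

    rpc='scripts/rpc.py --plugin scheduler_plugin'
    for mask in 0x1 0x2 0x4 0x8; do
        $rpc scheduler_thread_create -n active_pinned -m $mask -a 100   # fully busy, pinned
        $rpc scheduler_thread_create -n idle_pinned   -m $mask -a 0     # idle, pinned
    done
    $rpc scheduler_thread_create -n one_third_active -a 30
    id=$($rpc scheduler_thread_create -n half_active -a 0)
    $rpc scheduler_thread_set_active "$id" 50        # raise a live thread's activity
    id=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$id"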
00:06:16.976 00:06:16.976 real 0m4.412s 00:06:16.976 user 0m8.166s 00:06:16.976 sys 0m0.370s 00:06:16.976 05:53:08 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.976 05:53:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:16.976 ************************************ 00:06:16.976 END TEST event_scheduler 00:06:16.976 ************************************ 00:06:16.976 05:53:08 event -- common/autotest_common.sh@1142 -- # return 0 00:06:16.976 05:53:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:16.976 05:53:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:16.976 05:53:08 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.976 05:53:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.976 05:53:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.976 ************************************ 00:06:16.976 START TEST app_repeat 00:06:16.976 ************************************ 00:06:16.976 05:53:08 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:16.976 05:53:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.976 05:53:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.976 05:53:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:16.976 05:53:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.976 05:53:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:16.976 05:53:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:16.976 05:53:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:16.976 05:53:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=75111 00:06:16.976 05:53:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:16.976 Process app_repeat pid: 75111 00:06:16.976 05:53:08 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:16.976 05:53:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 75111' 00:06:16.976 05:53:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:16.976 spdk_app_start Round 0 00:06:16.976 05:53:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:16.976 05:53:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75111 /var/tmp/spdk-nbd.sock 00:06:16.976 05:53:08 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 75111 ']' 00:06:16.976 05:53:08 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.976 05:53:08 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:16.976 05:53:08 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:16.976 05:53:08 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.976 05:53:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:16.976 [2024-07-13 05:53:08.570100] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
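Each app_repeat round below rebuilds the same fixture: two malloc bdevs created over the app's dedicated socket with bdev_malloc_create 64 4096 (64 MB, 4096-byte blocks; the RPC prints the new bdev's name), each then exported as an NBD block device. As direct calls:

    sock=/var/tmp/spdk-nbd.sock
    for i in 0 1; do
        bdev=$(scripts/rpc.py -s $sock bdev_malloc_create 64 4096)   # prints e.g. Malloc0
        scripts/rpc.py -s $sock nbd_start_disk "$bdev" /dev/nbd$i    # expose it as /dev/nbdN
    done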
00:06:16.976 [2024-07-13 05:53:08.570359] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75111 ] 00:06:17.235 [2024-07-13 05:53:08.716535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.235 [2024-07-13 05:53:08.755221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.235 [2024-07-13 05:53:08.755298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.803 05:53:09 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.803 05:53:09 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:17.803 05:53:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.061 Malloc0 00:06:18.061 05:53:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.320 Malloc1 00:06:18.320 05:53:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.320 05:53:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.320 05:53:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.320 05:53:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.320 05:53:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.320 05:53:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.320 05:53:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.320 05:53:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.320 05:53:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.320 05:53:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.320 05:53:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.320 05:53:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.320 05:53:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:18.320 05:53:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.320 05:53:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.320 05:53:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:18.579 /dev/nbd0 00:06:18.579 05:53:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.579 05:53:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.579 05:53:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:18.579 05:53:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:18.579 05:53:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:18.579 05:53:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:18.579 05:53:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:18.579 05:53:10 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:06:18.579 05:53:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:18.579 05:53:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:18.579 05:53:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.838 1+0 records in 00:06:18.838 1+0 records out 00:06:18.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101053 s, 4.1 MB/s 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:18.838 05:53:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.838 05:53:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.838 05:53:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:18.838 /dev/nbd1 00:06:18.838 05:53:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.838 05:53:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.838 1+0 records in 00:06:18.838 1+0 records out 00:06:18.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276618 s, 14.8 MB/s 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:18.838 05:53:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:18.838 05:53:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.838 05:53:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.838 05:53:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.838 05:53:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.838 
05:53:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.098 05:53:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.098 { 00:06:19.098 "nbd_device": "/dev/nbd0", 00:06:19.098 "bdev_name": "Malloc0" 00:06:19.098 }, 00:06:19.098 { 00:06:19.098 "nbd_device": "/dev/nbd1", 00:06:19.098 "bdev_name": "Malloc1" 00:06:19.098 } 00:06:19.098 ]' 00:06:19.098 05:53:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.098 05:53:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.098 { 00:06:19.098 "nbd_device": "/dev/nbd0", 00:06:19.098 "bdev_name": "Malloc0" 00:06:19.098 }, 00:06:19.098 { 00:06:19.098 "nbd_device": "/dev/nbd1", 00:06:19.098 "bdev_name": "Malloc1" 00:06:19.098 } 00:06:19.098 ]' 00:06:19.357 05:53:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.357 /dev/nbd1' 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.358 /dev/nbd1' 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.358 256+0 records in 00:06:19.358 256+0 records out 00:06:19.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00517279 s, 203 MB/s 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.358 256+0 records in 00:06:19.358 256+0 records out 00:06:19.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252752 s, 41.5 MB/s 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.358 256+0 records in 00:06:19.358 256+0 records out 00:06:19.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259063 s, 40.5 MB/s 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.358 05:53:10 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.358 05:53:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.618 05:53:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.618 05:53:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.618 05:53:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.618 05:53:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.618 05:53:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.618 05:53:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.618 05:53:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.618 05:53:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.618 05:53:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.618 05:53:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.877 05:53:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.877 05:53:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.877 05:53:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.877 05:53:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.877 05:53:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.877 05:53:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.877 05:53:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.877 05:53:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.877 05:53:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.877 05:53:11 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.877 05:53:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.135 05:53:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.135 05:53:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.135 05:53:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.135 05:53:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.135 05:53:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.135 05:53:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.135 05:53:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:20.135 05:53:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.135 05:53:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.135 05:53:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.135 05:53:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.135 05:53:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.135 05:53:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.394 05:53:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:20.653 [2024-07-13 05:53:12.217076] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.653 [2024-07-13 05:53:12.247790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.653 [2024-07-13 05:53:12.247799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.653 [2024-07-13 05:53:12.276028] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:20.653 [2024-07-13 05:53:12.276124] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:23.970 05:53:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:23.970 spdk_app_start Round 1 00:06:23.970 05:53:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:23.970 05:53:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75111 /var/tmp/spdk-nbd.sock 00:06:23.970 05:53:15 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 75111 ']' 00:06:23.970 05:53:15 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.970 05:53:15 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:23.971 05:53:15 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
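Round 0 above already showed the heart of nbd_rpc_data_verify: generate a 1 MiB random pattern (256 blocks of 4096 bytes), write it through each NBD device with O_DIRECT, then byte-compare the device contents against the pattern file. Reduced to its essential commands:

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of=$tmp bs=4096 count=256             # 1 MiB random pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct    # write through the block device
        cmp -b -n 1M $tmp $nbd                               # read back and compare
    done
    rm $tmp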
00:06:23.971 05:53:15 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.971 05:53:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.971 05:53:15 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.971 05:53:15 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:23.971 05:53:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.971 Malloc0 00:06:23.971 05:53:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.229 Malloc1 00:06:24.229 05:53:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.229 05:53:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.229 05:53:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.229 05:53:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.229 05:53:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.229 05:53:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.229 05:53:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.229 05:53:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.229 05:53:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.229 05:53:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.229 05:53:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.229 05:53:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.229 05:53:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.229 05:53:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.229 05:53:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.229 05:53:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.487 /dev/nbd0 00:06:24.487 05:53:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.487 05:53:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.487 05:53:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:24.487 05:53:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:24.487 05:53:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:24.487 05:53:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:24.487 05:53:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:24.487 05:53:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:24.487 05:53:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:24.487 05:53:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:24.487 05:53:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.487 1+0 records in 00:06:24.487 1+0 records out 
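waitfornbd, whose probes run throughout these rounds, declares an NBD device usable once it appears in /proc/partitions and a direct single-block read from it succeeds (the 1+0 records dd output above). A condensed sketch — the retry bound mirrors the '(( i <= 20 ))' loop in the log, and the output path here is illustrative:

    waitfornbd() {
        local name=$1                                  # e.g. nbd0
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions && break
            sleep 0.1
        done
        dd if=/dev/$name of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # prove real I/O works
        [ "$(stat -c %s /tmp/nbdtest)" '!=' 0 ]        # the read must return data
    }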
00:06:24.487 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164327 s, 24.9 MB/s 00:06:24.487 05:53:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.487 05:53:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:24.487 05:53:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.487 05:53:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:24.487 05:53:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:24.487 05:53:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.487 05:53:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.487 05:53:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:25.054 /dev/nbd1 00:06:25.054 05:53:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:25.054 05:53:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:25.054 05:53:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:25.054 05:53:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:25.054 05:53:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:25.054 05:53:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:25.054 05:53:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:25.054 05:53:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:25.054 05:53:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:25.054 05:53:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:25.054 05:53:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.054 1+0 records in 00:06:25.054 1+0 records out 00:06:25.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346222 s, 11.8 MB/s 00:06:25.054 05:53:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.054 05:53:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:25.054 05:53:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.054 05:53:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:25.054 05:53:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:25.054 05:53:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.054 05:53:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.054 05:53:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.054 05:53:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.054 05:53:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.054 05:53:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.054 { 00:06:25.054 "nbd_device": "/dev/nbd0", 00:06:25.054 "bdev_name": "Malloc0" 00:06:25.054 }, 00:06:25.054 { 00:06:25.054 "nbd_device": "/dev/nbd1", 00:06:25.054 "bdev_name": "Malloc1" 00:06:25.054 } 
00:06:25.054 ]' 00:06:25.054 05:53:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.054 05:53:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.054 { 00:06:25.054 "nbd_device": "/dev/nbd0", 00:06:25.054 "bdev_name": "Malloc0" 00:06:25.054 }, 00:06:25.054 { 00:06:25.054 "nbd_device": "/dev/nbd1", 00:06:25.054 "bdev_name": "Malloc1" 00:06:25.054 } 00:06:25.054 ]' 00:06:25.054 05:53:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.054 /dev/nbd1' 00:06:25.054 05:53:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.054 /dev/nbd1' 00:06:25.054 05:53:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.312 256+0 records in 00:06:25.312 256+0 records out 00:06:25.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00569868 s, 184 MB/s 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:25.312 256+0 records in 00:06:25.312 256+0 records out 00:06:25.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249491 s, 42.0 MB/s 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:25.312 256+0 records in 00:06:25.312 256+0 records out 00:06:25.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270742 s, 38.7 MB/s 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:25.312 05:53:16 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.312 05:53:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.570 05:53:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.570 05:53:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.570 05:53:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.570 05:53:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.570 05:53:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.570 05:53:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.570 05:53:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.570 05:53:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.570 05:53:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.571 05:53:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.829 05:53:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.829 05:53:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.829 05:53:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.829 05:53:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.829 05:53:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.829 05:53:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.829 05:53:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.829 05:53:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.829 05:53:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.829 05:53:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.829 05:53:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.087 05:53:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.087 05:53:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.087 05:53:17 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.087 05:53:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.087 05:53:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.087 05:53:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.087 05:53:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:26.087 05:53:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.087 05:53:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.087 05:53:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.087 05:53:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.087 05:53:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.087 05:53:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:26.345 05:53:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:26.345 [2024-07-13 05:53:17.984330] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.345 [2024-07-13 05:53:18.022648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.345 [2024-07-13 05:53:18.022655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.345 [2024-07-13 05:53:18.052498] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:26.345 [2024-07-13 05:53:18.052609] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:29.626 05:53:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:29.626 spdk_app_start Round 2 00:06:29.626 05:53:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:29.626 05:53:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75111 /var/tmp/spdk-nbd.sock 00:06:29.626 05:53:20 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 75111 ']' 00:06:29.626 05:53:20 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.626 05:53:20 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:29.626 05:53:20 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
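# A minimal sketch of one complete app_repeat round, as traced above: only the
# RPC calls and dd/cmp steps visible in the log, with paths, sizes, and the
# socket taken verbatim from the trace, and all retry/error handling omitted.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
$rpc bdev_malloc_create 64 4096         # prints the new bdev name: Malloc0
$rpc bdev_malloc_create 64 4096         # prints Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of="$tmp" bs=4096 count=256         # 1 MiB of random data
for d in /dev/nbd0 /dev/nbd1; do
  dd if="$tmp" of="$d" bs=4096 count=256 oflag=direct  # write through nbd
  cmp -b -n 1M "$tmp" "$d"                             # read back and compare
done
rm "$tmp"
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
$rpc spdk_kill_instance SIGTERM                        # ends the round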
00:06:29.626 05:53:20 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.626 05:53:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.626 05:53:21 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.626 05:53:21 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:29.626 05:53:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.884 Malloc0 00:06:29.884 05:53:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.884 Malloc1 00:06:30.143 05:53:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:30.143 /dev/nbd0 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:30.143 05:53:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:30.143 05:53:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.143 05:53:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.143 05:53:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.143 05:53:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:30.143 05:53:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:30.143 05:53:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.143 05:53:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.143 05:53:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.143 1+0 records in 00:06:30.143 1+0 records out 
00:06:30.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304085 s, 13.5 MB/s 00:06:30.143 05:53:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.143 05:53:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.143 05:53:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.143 05:53:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.143 05:53:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.143 05:53:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:30.401 /dev/nbd1 00:06:30.401 05:53:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.401 05:53:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.401 05:53:22 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:30.401 05:53:22 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.401 05:53:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.401 05:53:22 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.401 05:53:22 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:30.401 05:53:22 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:30.401 05:53:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.401 05:53:22 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.401 05:53:22 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.401 1+0 records in 00:06:30.401 1+0 records out 00:06:30.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197272 s, 20.8 MB/s 00:06:30.401 05:53:22 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.401 05:53:22 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.402 05:53:22 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:30.402 05:53:22 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.402 05:53:22 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.402 05:53:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.402 05:53:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.402 05:53:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.402 05:53:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.402 05:53:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.661 05:53:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:30.661 { 00:06:30.661 "nbd_device": "/dev/nbd0", 00:06:30.661 "bdev_name": "Malloc0" 00:06:30.661 }, 00:06:30.661 { 00:06:30.661 "nbd_device": "/dev/nbd1", 00:06:30.661 "bdev_name": "Malloc1" 00:06:30.661 } 
00:06:30.661 ]' 00:06:30.661 05:53:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.661 { 00:06:30.661 "nbd_device": "/dev/nbd0", 00:06:30.661 "bdev_name": "Malloc0" 00:06:30.661 }, 00:06:30.661 { 00:06:30.661 "nbd_device": "/dev/nbd1", 00:06:30.661 "bdev_name": "Malloc1" 00:06:30.661 } 00:06:30.661 ]' 00:06:30.661 05:53:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.661 05:53:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.661 /dev/nbd1' 00:06:30.661 05:53:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.661 /dev/nbd1' 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.920 256+0 records in 00:06:30.920 256+0 records out 00:06:30.920 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618911 s, 169 MB/s 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:30.920 256+0 records in 00:06:30.920 256+0 records out 00:06:30.920 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273723 s, 38.3 MB/s 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.920 256+0 records in 00:06:30.920 256+0 records out 00:06:30.920 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0349716 s, 30.0 MB/s 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.920 05:53:22 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.920 05:53:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.179 05:53:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.179 05:53:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.179 05:53:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.179 05:53:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.179 05:53:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.179 05:53:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.179 05:53:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.179 05:53:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.179 05:53:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.179 05:53:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.438 05:53:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.438 05:53:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:31.438 05:53:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.438 05:53:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.438 05:53:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.438 05:53:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:31.438 05:53:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.438 05:53:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.438 05:53:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.438 05:53:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.438 05:53:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.696 05:53:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.696 05:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.696 05:53:23 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:31.696 05:53:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.696 05:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.696 05:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.696 05:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.696 05:53:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.696 05:53:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.696 05:53:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.696 05:53:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.696 05:53:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.696 05:53:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.955 05:53:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:31.955 [2024-07-13 05:53:23.661244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.214 [2024-07-13 05:53:23.694575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.214 [2024-07-13 05:53:23.694579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.214 [2024-07-13 05:53:23.724752] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.214 [2024-07-13 05:53:23.724890] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:35.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.501 05:53:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 75111 /var/tmp/spdk-nbd.sock 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 75111 ']' 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
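# The waitfornbd helper traced repeatedly above reduces to polling
# /proc/partitions for the device name (up to 20 tries, per the trace) and
# then reading one 4 KiB block back through the device. A minimal sketch;
# the sleep between polls is an assumption, as no interval is visible in
# the trace.
waitfornbd() {
  local nbd_name=$1 i size
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1                                  # assumed back-off interval
  done
  dd if="/dev/$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest \
    bs=4096 count=1 iflag=direct
  size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest)
  rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
  [ "$size" != 0 ]                   # a non-empty read means the device is live
}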
00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:35.501 05:53:26 event.app_repeat -- event/event.sh@39 -- # killprocess 75111 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 75111 ']' 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 75111 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75111 00:06:35.501 killing process with pid 75111 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75111' 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@967 -- # kill 75111 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@972 -- # wait 75111 00:06:35.501 spdk_app_start is called in Round 0. 00:06:35.501 Shutdown signal received, stop current app iteration 00:06:35.501 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:06:35.501 spdk_app_start is called in Round 1. 00:06:35.501 Shutdown signal received, stop current app iteration 00:06:35.501 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:06:35.501 spdk_app_start is called in Round 2. 00:06:35.501 Shutdown signal received, stop current app iteration 00:06:35.501 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:06:35.501 spdk_app_start is called in Round 3. 
00:06:35.501 Shutdown signal received, stop current app iteration 00:06:35.501 ************************************ 00:06:35.501 END TEST app_repeat 00:06:35.501 ************************************ 00:06:35.501 05:53:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:35.501 05:53:26 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:35.501 00:06:35.501 real 0m18.477s 00:06:35.501 user 0m41.800s 00:06:35.501 sys 0m2.521s 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.501 05:53:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.501 05:53:27 event -- common/autotest_common.sh@1142 -- # return 0 00:06:35.501 05:53:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:35.501 05:53:27 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:35.501 05:53:27 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.501 05:53:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.501 05:53:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.501 ************************************ 00:06:35.501 START TEST cpu_locks 00:06:35.501 ************************************ 00:06:35.501 05:53:27 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:35.501 * Looking for test storage... 00:06:35.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:35.501 05:53:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:35.501 05:53:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:35.501 05:53:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:35.501 05:53:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:35.501 05:53:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.501 05:53:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.501 05:53:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.501 ************************************ 00:06:35.501 START TEST default_locks 00:06:35.501 ************************************ 00:06:35.501 05:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:35.501 05:53:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=75543 00:06:35.501 05:53:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 75543 00:06:35.501 05:53:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.501 05:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 75543 ']' 00:06:35.501 05:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.501 05:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.501 05:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
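# The default_locks test starting here rests on a single assertion, traced
# below as locks_exist: a target started with -m 0x1 must hold a file lock
# whose name contains spdk_cpu_lock. A minimal sketch using exactly the
# lslocks/grep pair shown in the trace:
locks_exist() {
  lslocks -p "$1" | grep -q spdk_cpu_lock   # true iff the pid holds a core lock
}
# usage, once spdk_tgt is up: locks_exist "$spdk_tgt_pid"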
00:06:35.501 05:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.501 05:53:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.760 [2024-07-13 05:53:27.231262] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:35.760 [2024-07-13 05:53:27.231426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75543 ] 00:06:35.760 [2024-07-13 05:53:27.373708] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.760 [2024-07-13 05:53:27.409114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.696 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.696 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:36.696 05:53:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 75543 00:06:36.696 05:53:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 75543 00:06:36.696 05:53:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.955 05:53:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 75543 00:06:36.955 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 75543 ']' 00:06:36.955 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 75543 00:06:36.955 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:36.955 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.955 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75543 00:06:36.955 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.955 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.955 killing process with pid 75543 00:06:36.955 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75543' 00:06:36.955 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 75543 00:06:36.955 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 75543 00:06:37.214 05:53:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 75543 00:06:37.214 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:37.214 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75543 00:06:37.214 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:37.214 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.214 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:37.214 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.214 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 75543 00:06:37.214 05:53:28 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 75543 ']' 00:06:37.214 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.215 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.215 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.215 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.215 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.215 ERROR: process (pid: 75543) is no longer running 00:06:37.215 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (75543) - No such process 00:06:37.215 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.215 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:37.215 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:37.215 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.215 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:37.215 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.215 05:53:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:37.215 05:53:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:37.215 05:53:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:37.215 05:53:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:37.215 00:06:37.215 real 0m1.638s 00:06:37.215 user 0m1.810s 00:06:37.215 sys 0m0.470s 00:06:37.215 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.215 05:53:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.215 ************************************ 00:06:37.215 END TEST default_locks 00:06:37.215 ************************************ 00:06:37.215 05:53:28 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:37.215 05:53:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:37.215 05:53:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.215 05:53:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.215 05:53:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.215 ************************************ 00:06:37.215 START TEST default_locks_via_rpc 00:06:37.215 ************************************ 00:06:37.215 05:53:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:37.215 05:53:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=75592 00:06:37.215 05:53:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.215 05:53:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 75592 00:06:37.215 05:53:28 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 75592 ']' 00:06:37.215 05:53:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.215 05:53:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.215 05:53:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.215 05:53:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.215 05:53:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.215 [2024-07-13 05:53:28.920183] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:37.215 [2024-07-13 05:53:28.920370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75592 ] 00:06:37.474 [2024-07-13 05:53:29.063273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.474 [2024-07-13 05:53:29.099788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.412 05:53:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.412 05:53:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:38.412 05:53:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:38.412 05:53:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.412 05:53:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.412 05:53:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.412 05:53:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:38.412 05:53:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:38.412 05:53:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:38.412 05:53:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:38.412 05:53:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:38.412 05:53:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.412 05:53:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.412 05:53:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.412 05:53:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 75592 00:06:38.412 05:53:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 75592 00:06:38.412 05:53:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.671 05:53:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 75592 00:06:38.671 05:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 75592 ']' 
00:06:38.671 05:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 75592 00:06:38.671 05:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:38.671 05:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:38.671 05:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75592 00:06:38.671 05:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.671 05:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.671 killing process with pid 75592 00:06:38.671 05:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75592' 00:06:38.671 05:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 75592 00:06:38.671 05:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 75592 00:06:38.930 00:06:38.930 real 0m1.760s 00:06:38.930 user 0m1.980s 00:06:38.930 sys 0m0.490s 00:06:38.930 05:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.930 05:53:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.930 ************************************ 00:06:38.930 END TEST default_locks_via_rpc 00:06:38.930 ************************************ 00:06:38.930 05:53:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:38.930 05:53:30 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:38.930 05:53:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.930 05:53:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.930 05:53:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.930 ************************************ 00:06:38.930 START TEST non_locking_app_on_locked_coremask 00:06:38.930 ************************************ 00:06:38.930 05:53:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:38.930 05:53:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=75643 00:06:38.930 05:53:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.930 05:53:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 75643 /var/tmp/spdk.sock 00:06:38.930 05:53:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75643 ']' 00:06:38.930 05:53:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.930 05:53:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.930 05:53:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
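# The default_locks_via_rpc test that just ended exercises the same lock
# through RPC instead of process flags. A condensed sketch of its sequence,
# using only the RPC method names from the trace (rpc_cmd stands in for
# scripts/rpc.py against the default /var/tmp/spdk.sock):
rpc_cmd framework_disable_cpumask_locks    # target came up without lock files
rpc_cmd framework_enable_cpumask_locks     # the core 0 lock must now appear
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock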
00:06:38.930 05:53:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.930 05:53:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.238 [2024-07-13 05:53:30.731603] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:39.238 [2024-07-13 05:53:30.731779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75643 ] 00:06:39.238 [2024-07-13 05:53:30.872941] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.238 [2024-07-13 05:53:30.906705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.194 05:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.194 05:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:40.194 05:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=75659 00:06:40.194 05:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:40.194 05:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 75659 /var/tmp/spdk2.sock 00:06:40.194 05:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75659 ']' 00:06:40.194 05:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.194 05:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.194 05:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.194 05:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.194 05:53:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.194 [2024-07-13 05:53:31.728234] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:40.194 [2024-07-13 05:53:31.728424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75659 ] 00:06:40.194 [2024-07-13 05:53:31.880149] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
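# Both targets for non_locking_app_on_locked_coremask are now up on the same
# core mask. A condensed sketch of the pairing as launched above, with the
# binary, options, and sockets verbatim from the trace (backgrounding and
# waits elided):
spdk_tgt -m 0x1 &                                                 # first instance (75643): takes the core 0 lock
spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # second (75659): skips the lock
# only the first instance passes: lslocks -p <pid> | grep -q spdk_cpu_lock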
00:06:40.194 [2024-07-13 05:53:31.880220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.453 [2024-07-13 05:53:31.949240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.019 05:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.019 05:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:41.019 05:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 75643 00:06:41.019 05:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.019 05:53:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75643 00:06:41.585 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 75643 00:06:41.585 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75643 ']' 00:06:41.585 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 75643 00:06:41.585 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:41.585 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:41.585 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75643 00:06:41.843 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:41.843 killing process with pid 75643 00:06:41.843 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:41.843 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75643' 00:06:41.843 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 75643 00:06:41.843 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 75643 00:06:42.410 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 75659 00:06:42.410 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75659 ']' 00:06:42.410 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 75659 00:06:42.410 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:42.410 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.410 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75659 00:06:42.410 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.410 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.410 killing process with pid 75659 00:06:42.410 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75659' 00:06:42.410 05:53:33 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 75659 00:06:42.410 05:53:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 75659 00:06:42.669 00:06:42.669 real 0m3.537s 00:06:42.669 user 0m4.055s 00:06:42.669 sys 0m0.919s 00:06:42.669 05:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.669 05:53:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.669 ************************************ 00:06:42.669 END TEST non_locking_app_on_locked_coremask 00:06:42.669 ************************************ 00:06:42.669 05:53:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:42.669 05:53:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:42.669 05:53:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.669 05:53:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.669 05:53:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.669 ************************************ 00:06:42.669 START TEST locking_app_on_unlocked_coremask 00:06:42.669 ************************************ 00:06:42.669 05:53:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:42.669 05:53:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=75722 00:06:42.669 05:53:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 75722 /var/tmp/spdk.sock 00:06:42.669 05:53:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:42.669 05:53:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75722 ']' 00:06:42.669 05:53:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.669 05:53:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.669 05:53:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.669 05:53:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.669 05:53:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.669 [2024-07-13 05:53:34.349626] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:42.669 [2024-07-13 05:53:34.349815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75722 ] 00:06:42.928 [2024-07-13 05:53:34.498085] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
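# locking_app_on_unlocked_coremask, starting here, inverts that pairing: the
# first target opts out of the core lock, leaving it free for a second plain
# target on the same mask. Sketch under the same assumptions as above:
spdk_tgt -m 0x1 --disable-cpumask-locks &    # first (75722): holds no core lock
spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # second (75738): takes the core 0 lock
lslocks -p 75738 | grep -q spdk_cpu_lock     # passes for the second target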
00:06:42.928 [2024-07-13 05:53:34.498189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.928 [2024-07-13 05:53:34.537128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.867 05:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.867 05:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:43.867 05:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=75738 00:06:43.867 05:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 75738 /var/tmp/spdk2.sock 00:06:43.867 05:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75738 ']' 00:06:43.867 05:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.867 05:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.867 05:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.867 05:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.867 05:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:43.867 05:53:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.867 [2024-07-13 05:53:35.394986] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:43.867 [2024-07-13 05:53:35.395210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75738 ] 00:06:43.868 [2024-07-13 05:53:35.548183] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.127 [2024-07-13 05:53:35.622364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.695 05:53:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.695 05:53:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:44.695 05:53:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 75738 00:06:44.695 05:53:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75738 00:06:44.695 05:53:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.632 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 75722 00:06:45.632 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75722 ']' 00:06:45.632 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 75722 00:06:45.632 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:45.632 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.632 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75722 00:06:45.632 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:45.632 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:45.632 killing process with pid 75722 00:06:45.632 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75722' 00:06:45.632 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 75722 00:06:45.632 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 75722 00:06:45.891 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 75738 00:06:45.891 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75738 ']' 00:06:45.891 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 75738 00:06:45.891 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:45.891 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.891 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75738 00:06:46.150 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.150 killing process with pid 75738 00:06:46.150 05:53:37 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.150 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75738' 00:06:46.150 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 75738 00:06:46.150 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 75738 00:06:46.409 00:06:46.409 real 0m3.658s 00:06:46.409 user 0m4.235s 00:06:46.409 sys 0m0.994s 00:06:46.409 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.409 05:53:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.409 ************************************ 00:06:46.409 END TEST locking_app_on_unlocked_coremask 00:06:46.409 ************************************ 00:06:46.409 05:53:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:46.409 05:53:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:46.409 05:53:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.409 05:53:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.409 05:53:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.409 ************************************ 00:06:46.409 START TEST locking_app_on_locked_coremask 00:06:46.409 ************************************ 00:06:46.409 05:53:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:46.409 05:53:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=75802 00:06:46.409 05:53:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.409 05:53:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 75802 /var/tmp/spdk.sock 00:06:46.409 05:53:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75802 ']' 00:06:46.409 05:53:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.409 05:53:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.409 05:53:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.409 05:53:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.409 05:53:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.409 [2024-07-13 05:53:38.062817] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:46.409 [2024-07-13 05:53:38.063014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75802 ] 00:06:46.669 [2024-07-13 05:53:38.210525] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.669 [2024-07-13 05:53:38.244279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.238 05:53:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.238 05:53:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:47.238 05:53:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=75818 00:06:47.238 05:53:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.238 05:53:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 75818 /var/tmp/spdk2.sock 00:06:47.238 05:53:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:47.238 05:53:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75818 /var/tmp/spdk2.sock 00:06:47.238 05:53:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:47.238 05:53:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.238 05:53:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:47.238 05:53:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.238 05:53:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 75818 /var/tmp/spdk2.sock 00:06:47.238 05:53:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 75818 ']' 00:06:47.238 05:53:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.238 05:53:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.238 05:53:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.238 05:53:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.238 05:53:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.497 [2024-07-13 05:53:39.039370] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:47.497 [2024-07-13 05:53:39.039548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75818 ] 00:06:47.497 [2024-07-13 05:53:39.197603] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 75802 has claimed it. 00:06:47.497 [2024-07-13 05:53:39.197719] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:48.065 ERROR: process (pid: 75818) is no longer running 00:06:48.065 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (75818) - No such process 00:06:48.065 05:53:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.065 05:53:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:48.065 05:53:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:48.065 05:53:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.065 05:53:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.065 05:53:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.065 05:53:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 75802 00:06:48.065 05:53:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75802 00:06:48.065 05:53:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.632 05:53:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 75802 00:06:48.632 05:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 75802 ']' 00:06:48.632 05:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 75802 00:06:48.632 05:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:48.632 05:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.632 05:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75802 00:06:48.632 05:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.632 05:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.632 killing process with pid 75802 00:06:48.632 05:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75802' 00:06:48.632 05:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 75802 00:06:48.632 05:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 75802 00:06:48.892 00:06:48.892 real 0m2.527s 00:06:48.892 user 0m2.966s 00:06:48.892 sys 0m0.654s 00:06:48.892 05:53:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.892 05:53:40 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:48.892 ************************************ 00:06:48.892 END TEST locking_app_on_locked_coremask 00:06:48.892 ************************************ 00:06:48.892 05:53:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:48.892 05:53:40 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:48.892 05:53:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.892 05:53:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.892 05:53:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.892 ************************************ 00:06:48.892 START TEST locking_overlapped_coremask 00:06:48.892 ************************************ 00:06:48.892 05:53:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:48.892 05:53:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=75865 00:06:48.892 05:53:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:48.892 05:53:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 75865 /var/tmp/spdk.sock 00:06:48.892 05:53:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 75865 ']' 00:06:48.892 05:53:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.892 05:53:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.892 05:53:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.892 05:53:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.892 05:53:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.151 [2024-07-13 05:53:40.642239] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:49.151 [2024-07-13 05:53:40.642418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75865 ] 00:06:49.151 [2024-07-13 05:53:40.788326] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.151 [2024-07-13 05:53:40.823220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.151 [2024-07-13 05:53:40.823272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.151 [2024-07-13 05:53:40.823340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.088 05:53:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.088 05:53:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:50.088 05:53:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=75883 00:06:50.088 05:53:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:50.088 05:53:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 75883 /var/tmp/spdk2.sock 00:06:50.088 05:53:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:50.088 05:53:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75883 /var/tmp/spdk2.sock 00:06:50.088 05:53:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:50.088 05:53:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.088 05:53:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:50.088 05:53:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.088 05:53:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 75883 /var/tmp/spdk2.sock 00:06:50.088 05:53:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 75883 ']' 00:06:50.088 05:53:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.088 05:53:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.088 05:53:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.088 05:53:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.088 05:53:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.088 [2024-07-13 05:53:41.616564] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:50.088 [2024-07-13 05:53:41.616736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75883 ] 00:06:50.088 [2024-07-13 05:53:41.771692] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75865 has claimed it. 00:06:50.088 [2024-07-13 05:53:41.771781] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:50.656 ERROR: process (pid: 75883) is no longer running 00:06:50.656 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (75883) - No such process 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 75865 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 75865 ']' 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 75865 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75865 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.656 killing process with pid 75865 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75865' 00:06:50.656 05:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 75865 00:06:50.656 05:53:42 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 75865 00:06:50.915 00:06:50.915 real 0m2.087s 00:06:50.915 user 0m5.790s 00:06:50.915 sys 0m0.436s 00:06:50.915 05:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.915 ************************************ 00:06:50.915 END TEST locking_overlapped_coremask 00:06:50.915 ************************************ 00:06:50.915 05:53:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.174 05:53:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:51.174 05:53:42 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:51.174 05:53:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.174 05:53:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.174 05:53:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.174 ************************************ 00:06:51.174 START TEST locking_overlapped_coremask_via_rpc 00:06:51.174 ************************************ 00:06:51.174 05:53:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:51.174 05:53:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=75930 00:06:51.174 05:53:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 75930 /var/tmp/spdk.sock 00:06:51.174 05:53:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 75930 ']' 00:06:51.174 05:53:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:51.174 05:53:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.174 05:53:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.174 05:53:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.174 05:53:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.174 05:53:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.174 [2024-07-13 05:53:42.759859] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:51.174 [2024-07-13 05:53:42.760046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75930 ] 00:06:51.433 [2024-07-13 05:53:42.902297] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
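The check_remaining_locks helper traced at the end of the previous test is worth reading on its own: it expands the lock-file glob and string-compares the result against the exact set expected for mask 0x7 (files 000 through 002). Reconstructed from the xtrace above as a standalone sketch, with the same paths:

    locks=(/var/tmp/spdk_cpu_lock_*)                     # whatever lock files exist right now
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0, 1, 2 for -m 0x7
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'exactly cores 0-2 are locked'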
00:06:51.433 [2024-07-13 05:53:42.902391] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.433 [2024-07-13 05:53:42.937699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.433 [2024-07-13 05:53:42.937733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.433 [2024-07-13 05:53:42.937752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.001 05:53:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.001 05:53:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:52.001 05:53:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=75943 00:06:52.001 05:53:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 75943 /var/tmp/spdk2.sock 00:06:52.001 05:53:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 75943 ']' 00:06:52.001 05:53:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.001 05:53:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.001 05:53:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.001 05:53:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.001 05:53:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:52.001 05:53:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.259 [2024-07-13 05:53:43.740088] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:52.259 [2024-07-13 05:53:43.740298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75943 ] 00:06:52.259 [2024-07-13 05:53:43.896615] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
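Both overlapped-coremask tests pair a first target on -m 0x7 with a second on -m 0x1c, and both times the conflict is reported on core 2; that is the one core the two masks share, as the RPC attempt below confirms. Plain bash arithmetic makes the overlap visible (nothing SPDK-specific here):

    printf '0x7  -> cores:'; for i in {0..7}; do (( (0x7  >> i) & 1 )) && printf ' %d' "$i"; done; echo
    printf '0x1c -> cores:'; for i in {0..7}; do (( (0x1c >> i) & 1 )) && printf ' %d' "$i"; done; echo
    # 0x7 = {0,1,2} and 0x1c = {2,3,4}; only core 2 is in both, so that is where the claim fails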
00:06:52.259 [2024-07-13 05:53:43.896685] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.259 [2024-07-13 05:53:43.972388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.259 [2024-07-13 05:53:43.972398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.259 [2024-07-13 05:53:43.972447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.192 [2024-07-13 05:53:44.611309] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75930 has claimed it. 
00:06:53.192 request: 00:06:53.192 { 00:06:53.192 "method": "framework_enable_cpumask_locks", 00:06:53.192 "req_id": 1 00:06:53.192 } 00:06:53.192 Got JSON-RPC error response 00:06:53.192 response: 00:06:53.192 { 00:06:53.192 "code": -32603, 00:06:53.192 "message": "Failed to claim CPU core: 2" 00:06:53.192 } 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 75930 /var/tmp/spdk.sock 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 75930 ']' 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 75943 /var/tmp/spdk2.sock 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 75943 ']' 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
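This variant starts both targets with core locks deactivated and only claims cores afterwards, through the framework_enable_cpumask_locks RPC; the exchange above shows the first target's claim succeeding and the second failing with -32603 because process 75930 already holds core 2. The harness's rpc_cmd wraps scripts/rpc.py, so issued by hand the two calls would look roughly like this (socket paths as in the log):

    ./scripts/rpc.py framework_enable_cpumask_locks                         # first target on /var/tmp/spdk.sock: claims cores 0-2
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: JSON-RPC error -32603,
                                                                            # 'Failed to claim CPU core: 2'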
00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.192 05:53:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.778 ************************************ 00:06:53.778 END TEST locking_overlapped_coremask_via_rpc 00:06:53.778 ************************************ 00:06:53.778 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.778 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:53.778 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:53.778 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.778 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.778 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.778 00:06:53.778 real 0m2.525s 00:06:53.778 user 0m1.291s 00:06:53.778 sys 0m0.161s 00:06:53.778 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.778 05:53:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.778 05:53:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:53.778 05:53:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.778 05:53:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 75930 ]] 00:06:53.778 05:53:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 75930 00:06:53.778 05:53:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 75930 ']' 00:06:53.778 05:53:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 75930 00:06:53.778 05:53:45 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:53.778 05:53:45 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.778 05:53:45 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75930 00:06:53.778 killing process with pid 75930 00:06:53.778 05:53:45 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:53.778 05:53:45 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:53.778 05:53:45 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75930' 00:06:53.778 05:53:45 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 75930 00:06:53.778 05:53:45 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 75930 00:06:54.041 05:53:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 75943 ]] 00:06:54.041 05:53:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 75943 00:06:54.041 05:53:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 75943 ']' 00:06:54.041 05:53:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 75943 00:06:54.041 05:53:45 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:54.041 05:53:45 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.041 05:53:45 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75943 00:06:54.041 killing process with pid 75943 00:06:54.041 05:53:45 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:54.041 05:53:45 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:54.041 05:53:45 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75943' 00:06:54.041 05:53:45 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 75943 00:06:54.041 05:53:45 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 75943 00:06:54.300 05:53:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:54.300 Process with pid 75930 is not found 00:06:54.300 Process with pid 75943 is not found 00:06:54.300 05:53:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:54.300 05:53:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 75930 ]] 00:06:54.300 05:53:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 75930 00:06:54.300 05:53:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 75930 ']' 00:06:54.300 05:53:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 75930 00:06:54.300 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (75930) - No such process 00:06:54.300 05:53:45 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 75930 is not found' 00:06:54.300 05:53:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 75943 ]] 00:06:54.300 05:53:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 75943 00:06:54.300 05:53:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 75943 ']' 00:06:54.300 05:53:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 75943 00:06:54.300 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (75943) - No such process 00:06:54.300 05:53:45 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 75943 is not found' 00:06:54.300 05:53:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:54.300 ************************************ 00:06:54.300 END TEST cpu_locks 00:06:54.300 ************************************ 00:06:54.300 00:06:54.300 real 0m18.846s 00:06:54.300 user 0m33.654s 00:06:54.300 sys 0m4.913s 00:06:54.300 05:53:45 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.300 05:53:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.300 05:53:45 event -- common/autotest_common.sh@1142 -- # return 0 00:06:54.300 ************************************ 00:06:54.300 END TEST event 00:06:54.300 ************************************ 00:06:54.300 00:06:54.300 real 0m46.027s 00:06:54.300 user 1m30.034s 00:06:54.300 sys 0m8.312s 00:06:54.300 05:53:45 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.300 05:53:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.300 05:53:45 -- common/autotest_common.sh@1142 -- # return 0 00:06:54.300 05:53:45 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:54.300 05:53:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.300 05:53:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.300 05:53:45 -- common/autotest_common.sh@10 -- # set +x 00:06:54.300 ************************************ 00:06:54.300 START TEST thread 
00:06:54.300 ************************************ 00:06:54.300 05:53:45 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:54.559 * Looking for test storage... 00:06:54.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:54.559 05:53:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.559 05:53:46 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:54.559 05:53:46 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.559 05:53:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.559 ************************************ 00:06:54.559 START TEST thread_poller_perf 00:06:54.559 ************************************ 00:06:54.559 05:53:46 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.559 [2024-07-13 05:53:46.102522] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:54.559 [2024-07-13 05:53:46.102847] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76068 ] 00:06:54.559 [2024-07-13 05:53:46.245112] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.559 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:54.559 [2024-07-13 05:53:46.278904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.934 ====================================== 00:06:55.934 busy:2213860146 (cyc) 00:06:55.934 total_run_count: 312000 00:06:55.934 tsc_hz: 2200000000 (cyc) 00:06:55.934 ====================================== 00:06:55.934 poller_cost: 7095 (cyc), 3225 (nsec) 00:06:55.934 00:06:55.934 real 0m1.291s 00:06:55.934 user 0m1.124s 00:06:55.934 sys 0m0.061s 00:06:55.934 05:53:47 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.934 05:53:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.934 ************************************ 00:06:55.934 END TEST thread_poller_perf 00:06:55.934 ************************************ 00:06:55.934 05:53:47 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:55.934 05:53:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.934 05:53:47 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:55.934 05:53:47 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.934 05:53:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.934 ************************************ 00:06:55.934 START TEST thread_poller_perf 00:06:55.934 ************************************ 00:06:55.934 05:53:47 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.934 [2024-07-13 05:53:47.455224] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:55.934 [2024-07-13 05:53:47.455450] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76099 ] 00:06:55.934 [2024-07-13 05:53:47.604242] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.934 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:55.934 [2024-07-13 05:53:47.642513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.311 ====================================== 00:06:57.311 busy:2204222196 (cyc) 00:06:57.311 total_run_count: 4490000 00:06:57.311 tsc_hz: 2200000000 (cyc) 00:06:57.311 ====================================== 00:06:57.311 poller_cost: 490 (cyc), 222 (nsec) 00:06:57.311 ************************************ 00:06:57.311 END TEST thread_poller_perf 00:06:57.311 ************************************ 00:06:57.311 00:06:57.311 real 0m1.292s 00:06:57.311 user 0m1.107s 00:06:57.311 sys 0m0.078s 00:06:57.311 05:53:48 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.311 05:53:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.311 05:53:48 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:57.311 05:53:48 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:57.311 ************************************ 00:06:57.311 END TEST thread 00:06:57.311 ************************************ 00:06:57.311 00:06:57.311 real 0m2.771s 00:06:57.311 user 0m2.305s 00:06:57.311 sys 0m0.248s 00:06:57.311 05:53:48 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.311 05:53:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.311 05:53:48 -- common/autotest_common.sh@1142 -- # return 0 00:06:57.311 05:53:48 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:57.311 05:53:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.311 05:53:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.311 05:53:48 -- common/autotest_common.sh@10 -- # set +x 00:06:57.311 ************************************ 00:06:57.311 START TEST accel 00:06:57.311 ************************************ 00:06:57.311 05:53:48 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:57.311 * Looking for test storage... 00:06:57.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:57.311 05:53:48 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:57.311 05:53:48 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:57.311 05:53:48 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:57.311 05:53:48 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=76180 00:06:57.311 05:53:48 accel -- accel/accel.sh@63 -- # waitforlisten 76180 00:06:57.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.311 05:53:48 accel -- common/autotest_common.sh@829 -- # '[' -z 76180 ']' 00:06:57.311 05:53:48 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.311 05:53:48 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.311 05:53:48 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
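The poller_cost lines in the two poller_perf summaries above follow from the other reported numbers: busy TSC cycles divided by total_run_count gives the per-poll cost in cycles, and tsc_hz converts that to nanoseconds. Both reported pairs are reproduced by integer truncation at each step, as this awk sketch over the logged values shows:

    awk 'BEGIN {
        hz = 2200000000                                         # tsc_hz from both summaries
        c = int(2213860146 / 312000)                            # run 1: -l 1 (1 us period)
        printf "run 1: %d cyc, %d nsec\n", c, int(c/hz * 1e9)   # 7095 cyc, 3225 nsec
        c = int(2204222196 / 4490000)                           # run 2: -l 0 (0 us period)
        printf "run 2: %d cyc, %d nsec\n", c, int(c/hz * 1e9)   # 490 cyc, 222 nsec
    }'

The much higher per-poll cost in the first run is consistent with its pollers being timed (1 microsecond period) rather than the busy-loop pollers of the second run.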
00:06:57.311 05:53:48 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.311 05:53:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.311 05:53:48 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:57.311 05:53:48 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:57.311 05:53:48 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.311 05:53:48 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.311 05:53:48 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.311 05:53:48 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.311 05:53:48 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.311 05:53:48 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:57.311 05:53:48 accel -- accel/accel.sh@41 -- # jq -r . 00:06:57.311 [2024-07-13 05:53:48.989624] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:57.311 [2024-07-13 05:53:48.989837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76180 ] 00:06:57.570 [2024-07-13 05:53:49.136405] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.570 [2024-07-13 05:53:49.168453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.507 05:53:49 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.507 05:53:49 accel -- common/autotest_common.sh@862 -- # return 0 00:06:58.507 05:53:49 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:58.507 05:53:49 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:58.507 05:53:49 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:58.507 05:53:49 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:58.507 05:53:49 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:58.507 05:53:49 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:58.507 05:53:49 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.507 05:53:49 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:58.507 05:53:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.507 05:53:49 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.507 05:53:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.507 05:53:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.507 05:53:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.507 05:53:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.507 05:53:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.507 05:53:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.507 05:53:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.507 05:53:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.507 05:53:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.507 05:53:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.507 05:53:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.507 05:53:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.507 05:53:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.507 05:53:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.507 05:53:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.507 05:53:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.507 05:53:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.507 05:53:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.507 05:53:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.507 05:53:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.507 05:53:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.507 05:53:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.507 
05:53:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.507 05:53:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.507 05:53:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.507 05:53:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.507 05:53:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.507 05:53:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.507 05:53:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.507 05:53:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.507 05:53:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.507 05:53:49 accel -- accel/accel.sh@75 -- # killprocess 76180 00:06:58.507 05:53:49 accel -- common/autotest_common.sh@948 -- # '[' -z 76180 ']' 00:06:58.507 05:53:49 accel -- common/autotest_common.sh@952 -- # kill -0 76180 00:06:58.507 05:53:49 accel -- common/autotest_common.sh@953 -- # uname 00:06:58.507 05:53:49 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:58.507 05:53:49 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76180 00:06:58.507 05:53:49 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:58.507 05:53:49 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:58.507 05:53:49 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76180' 00:06:58.507 killing process with pid 76180 00:06:58.507 05:53:49 accel -- common/autotest_common.sh@967 -- # kill 76180 00:06:58.507 05:53:49 accel -- common/autotest_common.sh@972 -- # wait 76180 00:06:58.766 05:53:50 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:58.766 05:53:50 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:58.766 05:53:50 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:58.766 05:53:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.766 05:53:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.766 05:53:50 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:58.766 05:53:50 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:58.766 05:53:50 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:58.766 05:53:50 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.766 05:53:50 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.766 05:53:50 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.766 05:53:50 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.766 05:53:50 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.766 05:53:50 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:58.766 05:53:50 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
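The long read loop traced above is the harness walking the accel_get_opc_assignments table and recording that every opcode is currently served by the software module; no hardware accel module was configured for this target. The underlying query, exactly as rpc_cmd issues it, can be run standalone:

    ./scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # one 'opcode=module' line per operation; in this run each module is 'software'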
00:06:58.766 05:53:50 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.766 05:53:50 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:58.766 05:53:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.766 05:53:50 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:58.766 05:53:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:58.766 05:53:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.766 05:53:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.766 ************************************ 00:06:58.766 START TEST accel_missing_filename 00:06:58.766 ************************************ 00:06:58.766 05:53:50 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:58.766 05:53:50 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:58.766 05:53:50 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:58.766 05:53:50 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:58.766 05:53:50 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.766 05:53:50 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:58.766 05:53:50 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.766 05:53:50 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:58.766 05:53:50 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:58.766 05:53:50 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:58.766 05:53:50 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.766 05:53:50 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.766 05:53:50 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.766 05:53:50 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.766 05:53:50 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.766 05:53:50 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:58.766 05:53:50 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:58.766 [2024-07-13 05:53:50.441901] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:58.766 [2024-07-13 05:53:50.442078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76228 ] 00:06:59.025 [2024-07-13 05:53:50.590610] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.025 [2024-07-13 05:53:50.624852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.025 [2024-07-13 05:53:50.658939] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.025 [2024-07-13 05:53:50.707217] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:59.285 A filename is required. 
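Two idioms recur in the trace above. The NOT wrapper (valid_exec_arg plus the case over type -t) runs accel_perf expecting a non-zero exit, so the "A filename is required." abort is the accel_missing_filename test passing, with the es remapping just below normalizing the signal-derived status. And build_accel_config comma-joins the accel_json_cfg fragments (the local IFS=, line) into JSON that accel_perf reads through process substitution, which is why every invocation carries -c /dev/fd/62. A sketch of the config half; the JSON envelope is an assumption, since the trace only shows the join and jq -r .:

    build_accel_config() {
        accel_json_cfg=()
        # per-test fragments would be appended to accel_json_cfg here
        local IFS=,
        jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }
    # the /dev/fd/62 seen in each trace is a process substitution like this:
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c <(build_accel_config) -t 1 -w crc32c -y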
00:06:59.285 ************************************ 00:06:59.285 END TEST accel_missing_filename 00:06:59.285 ************************************ 00:06:59.285 05:53:50 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:59.285 05:53:50 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.285 05:53:50 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:59.285 05:53:50 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:59.285 05:53:50 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:59.285 05:53:50 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.285 00:06:59.285 real 0m0.391s 00:06:59.285 user 0m0.224s 00:06:59.285 sys 0m0.112s 00:06:59.285 05:53:50 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.285 05:53:50 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:59.285 05:53:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.285 05:53:50 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:59.285 05:53:50 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:59.285 05:53:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.285 05:53:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.285 ************************************ 00:06:59.285 START TEST accel_compress_verify 00:06:59.285 ************************************ 00:06:59.285 05:53:50 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:59.285 05:53:50 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:59.285 05:53:50 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:59.285 05:53:50 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:59.285 05:53:50 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.285 05:53:50 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:59.285 05:53:50 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.285 05:53:50 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:59.285 05:53:50 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:59.285 05:53:50 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:59.285 05:53:50 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.285 05:53:50 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.285 05:53:50 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.285 05:53:50 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.285 05:53:50 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.285 05:53:50 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:06:59.285 05:53:50 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:59.285 [2024-07-13 05:53:50.894630] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:59.285 [2024-07-13 05:53:50.894917] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76254 ] 00:06:59.544 [2024-07-13 05:53:51.055912] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.544 [2024-07-13 05:53:51.090268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.544 [2024-07-13 05:53:51.121420] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.544 [2024-07-13 05:53:51.167333] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:59.544 00:06:59.544 Compression does not support the verify option, aborting. 00:06:59.544 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:59.544 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.544 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:59.544 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:59.544 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:59.544 ************************************ 00:06:59.544 END TEST accel_compress_verify 00:06:59.544 ************************************ 00:06:59.544 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.544 00:06:59.544 real 0m0.405s 00:06:59.544 user 0m0.226s 00:06:59.544 sys 0m0.124s 00:06:59.544 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.544 05:53:51 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:59.804 05:53:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.804 05:53:51 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:59.804 05:53:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:59.804 05:53:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.804 05:53:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.804 ************************************ 00:06:59.804 START TEST accel_wrong_workload 00:06:59.804 ************************************ 00:06:59.804 05:53:51 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:59.805 05:53:51 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:59.805 05:53:51 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:59.805 05:53:51 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:59.805 05:53:51 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.805 05:53:51 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:59.805 05:53:51 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.805 05:53:51 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:59.805 05:53:51 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:59.805 05:53:51 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:59.805 05:53:51 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.805 05:53:51 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.805 05:53:51 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.805 05:53:51 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.805 05:53:51 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.805 05:53:51 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:59.805 05:53:51 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:59.805 Unsupported workload type: foobar 00:06:59.805 [2024-07-13 05:53:51.326865] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:59.805 accel_perf options: 00:06:59.805 [-h help message] 00:06:59.805 [-q queue depth per core] 00:06:59.805 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:59.805 [-T number of threads per core 00:06:59.805 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:59.805 [-t time in seconds] 00:06:59.805 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:59.805 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:59.805 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:59.805 [-l for compress/decompress workloads, name of uncompressed input file 00:06:59.805 [-S for crc32c workload, use this seed value (default 0) 00:06:59.805 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:59.805 [-f for fill workload, use this BYTE value (default 255) 00:06:59.805 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:59.805 [-y verify result if this switch is on] 00:06:59.805 [-a tasks to allocate per core (default: same value as -q)] 00:06:59.805 Can be used to spread operations across a wider range of memory. 
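The usage listing above (emitted when the bogus -w foobar workload is rejected) documents every flag the subsequent tests exercise. For reference, a valid invocation assembled purely from options in that listing, with the binary path as used throughout this log:

    # 1-second software crc32c run: seed 32, queue depth 64, 4 KiB transfers,
    # and -y to verify each result
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -q 64 -o 4096 -y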
00:06:59.805 ************************************ 00:06:59.805 END TEST accel_wrong_workload 00:06:59.805 ************************************ 00:06:59.805 05:53:51 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:59.805 05:53:51 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.805 05:53:51 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.805 05:53:51 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.805 00:06:59.805 real 0m0.055s 00:06:59.805 user 0m0.078s 00:06:59.805 sys 0m0.025s 00:06:59.805 05:53:51 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.805 05:53:51 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:59.805 05:53:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.805 05:53:51 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:59.805 05:53:51 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:59.805 05:53:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.805 05:53:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.805 ************************************ 00:06:59.805 START TEST accel_negative_buffers 00:06:59.805 ************************************ 00:06:59.805 05:53:51 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:59.805 05:53:51 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:59.805 05:53:51 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:59.805 05:53:51 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:59.805 05:53:51 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.805 05:53:51 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:59.805 05:53:51 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.805 05:53:51 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:59.805 05:53:51 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:59.805 05:53:51 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:59.805 05:53:51 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.805 05:53:51 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.805 05:53:51 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.805 05:53:51 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.805 05:53:51 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.805 05:53:51 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:59.805 05:53:51 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:59.805 -x option must be non-negative. 
00:06:59.805 [2024-07-13 05:53:51.445240] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:59.805 accel_perf options: 00:06:59.805 [-h help message] 00:06:59.805 [-q queue depth per core] 00:06:59.805 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:59.805 [-T number of threads per core 00:06:59.805 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:59.805 [-t time in seconds] 00:06:59.805 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:59.805 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:59.805 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:59.805 [-l for compress/decompress workloads, name of uncompressed input file 00:06:59.805 [-S for crc32c workload, use this seed value (default 0) 00:06:59.805 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:59.805 [-f for fill workload, use this BYTE value (default 255) 00:06:59.805 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:59.805 [-y verify result if this switch is on] 00:06:59.805 [-a tasks to allocate per core (default: same value as -q)] 00:06:59.805 Can be used to spread operations across a wider range of memory. 00:06:59.805 05:53:51 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:59.805 05:53:51 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.805 05:53:51 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.805 05:53:51 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.805 00:06:59.805 real 0m0.072s 00:06:59.805 user 0m0.087s 00:06:59.805 sys 0m0.033s 00:06:59.805 05:53:51 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.805 ************************************ 00:06:59.805 END TEST accel_negative_buffers 00:06:59.805 ************************************ 00:06:59.805 05:53:51 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:59.805 05:53:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.805 05:53:51 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:59.805 05:53:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:59.805 05:53:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.805 05:53:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.065 ************************************ 00:07:00.065 START TEST accel_crc32c 00:07:00.065 ************************************ 00:07:00.065 05:53:51 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:00.065 05:53:51 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:00.065 05:53:51 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:00.065 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.065 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.065 05:53:51 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:00.065 05:53:51 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:07:00.065 05:53:51 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:00.065 05:53:51 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.065 05:53:51 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.065 05:53:51 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.065 05:53:51 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.065 05:53:51 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.065 05:53:51 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:00.065 05:53:51 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:00.065 [2024-07-13 05:53:51.579843] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:00.065 [2024-07-13 05:53:51.580839] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76315 ] 00:07:00.065 [2024-07-13 05:53:51.732473] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.065 [2024-07-13 05:53:51.765039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.324 05:53:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.325 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.325 05:53:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.260 ************************************ 00:07:01.260 END TEST accel_crc32c 00:07:01.260 ************************************ 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:01.260 05:53:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.260 00:07:01.260 real 0m1.390s 00:07:01.260 user 0m0.017s 00:07:01.260 sys 0m0.004s 00:07:01.260 05:53:52 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.260 05:53:52 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:01.260 05:53:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.260 05:53:52 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:01.260 05:53:52 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:01.260 05:53:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.260 05:53:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.260 ************************************ 00:07:01.260 START TEST accel_crc32c_C2 00:07:01.260 ************************************ 00:07:01.260 05:53:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:01.260 05:53:52 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.260 05:53:52 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:01.260 05:53:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.260 05:53:52 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:01.260 05:53:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.260 05:53:52 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:01.260 05:53:52 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.260 05:53:52 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.260 05:53:52 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.260 05:53:52 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.260 05:53:52 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.260 05:53:52 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.260 05:53:52 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:01.260 05:53:52 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:01.519 [2024-07-13 05:53:53.025570] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:01.519 [2024-07-13 05:53:53.025761] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76351 ] 00:07:01.519 [2024-07-13 05:53:53.170047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.519 [2024-07-13 05:53:53.202239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.519 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.778 05:53:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.716 05:53:54 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.716 00:07:02.716 real 0m1.374s 00:07:02.716 user 0m1.179s 00:07:02.716 sys 0m0.099s 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.716 05:53:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:02.716 ************************************ 00:07:02.716 END TEST accel_crc32c_C2 00:07:02.716 ************************************ 00:07:02.716 05:53:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.716 05:53:54 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:02.716 05:53:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:02.716 05:53:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.716 05:53:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.716 ************************************ 00:07:02.716 START TEST accel_copy 00:07:02.716 ************************************ 00:07:02.716 05:53:54 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:02.716 05:53:54 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:02.716 05:53:54 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:02.716 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.716 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.716 05:53:54 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:02.716 05:53:54 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:02.716 05:53:54 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:02.716 05:53:54 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.716 05:53:54 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.716 05:53:54 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.716 05:53:54 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.716 05:53:54 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.716 05:53:54 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:02.716 05:53:54 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:02.975 [2024-07-13 05:53:54.444444] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:02.975 [2024-07-13 05:53:54.444663] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76386 ] 00:07:02.975 [2024-07-13 05:53:54.587226] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.975 [2024-07-13 05:53:54.619378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.975 
05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.975 05:53:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:04.352 05:53:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.352 00:07:04.352 real 0m1.367s 00:07:04.352 user 0m1.168s 00:07:04.352 sys 0m0.111s 00:07:04.352 05:53:55 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.352 ************************************ 00:07:04.352 END TEST accel_copy 00:07:04.352 05:53:55 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:04.352 ************************************ 00:07:04.352 05:53:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.352 05:53:55 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.352 05:53:55 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:04.352 05:53:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.352 05:53:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.352 ************************************ 00:07:04.352 START TEST accel_fill 00:07:04.352 ************************************ 00:07:04.352 05:53:55 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.352 05:53:55 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:04.352 05:53:55 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:04.352 05:53:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.352 05:53:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.352 05:53:55 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.352 05:53:55 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.352 05:53:55 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:04.352 05:53:55 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.352 05:53:55 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.352 05:53:55 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.352 05:53:55 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.352 05:53:55 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.352 05:53:55 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:04.352 05:53:55 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:04.352 [2024-07-13 05:53:55.864512] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:04.352 [2024-07-13 05:53:55.865181] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76416 ] 00:07:04.352 [2024-07-13 05:53:56.014219] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.352 [2024-07-13 05:53:56.055841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.611 05:53:56 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.611 05:53:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:05.550 05:53:57 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.550 00:07:05.550 real 0m1.388s 00:07:05.550 user 0m1.184s 00:07:05.550 sys 0m0.114s 00:07:05.550 05:53:57 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.550 05:53:57 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:05.550 ************************************ 00:07:05.550 END TEST accel_fill 00:07:05.550 ************************************ 00:07:05.550 05:53:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.550 05:53:57 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:05.550 05:53:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:05.550 05:53:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.550 05:53:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.550 ************************************ 00:07:05.550 START TEST accel_copy_crc32c 00:07:05.550 ************************************ 00:07:05.550 05:53:57 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:05.550 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:05.550 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:05.550 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.550 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.550 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:05.550 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:05.550 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:05.550 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.550 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.550 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.550 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.550 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.550 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:07:05.550 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:05.810 [2024-07-13 05:53:57.299043] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:05.810 [2024-07-13 05:53:57.299851] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76457 ] 00:07:05.810 [2024-07-13 05:53:57.443748] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.810 [2024-07-13 05:53:57.477218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.810 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.810 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.810 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.810 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.810 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.810 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.810 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.810 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.810 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:05.811 05:53:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
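
The values echoed back above (operation copy_crc32c, 4096-byte buffers, the software module, paired 32s that look like queue-depth settings, a 1-second run, verification on) mirror the accel_perf command line shown earlier in this test. A sketch of the same run done by hand, assuming the binary sits at the path the harness uses; -c /dev/fd/62 is the harness feeding a JSON config over a file descriptor and can be omitted standalone (flag meanings are inferred from the trace, not from accel_perf's help output):

# One second of copy_crc32c on the software engine with verification,
# matching the traced harness invocation minus the fd-based config.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y
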
00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.192 00:07:07.192 real 0m1.372s 00:07:07.192 user 0m1.174s 00:07:07.192 sys 0m0.108s 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.192 05:53:58 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:07.192 ************************************ 00:07:07.192 END TEST accel_copy_crc32c 00:07:07.192 ************************************ 00:07:07.192 05:53:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.192 05:53:58 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:07.192 05:53:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:07.192 05:53:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.192 05:53:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.192 ************************************ 00:07:07.192 START TEST accel_copy_crc32c_C2 00:07:07.192 ************************************ 00:07:07.192 05:53:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:07.192 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.192 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:07.192 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.192 05:53:58 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:07.192 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.192 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:07.192 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.192 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.192 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.192 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.192 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.192 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.192 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:07.192 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:07.192 [2024-07-13 05:53:58.725692] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:07.192 [2024-07-13 05:53:58.725865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76487 ] 00:07:07.192 [2024-07-13 05:53:58.867656] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.192 [2024-07-13 05:53:58.907606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 05:53:58 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.452 05:53:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.412 00:07:08.412 real 0m1.383s 00:07:08.412 user 0m1.186s 00:07:08.412 sys 0m0.107s 00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
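
Relative to the plain copy_crc32c test, the only change here is the -C 2 argument, and the configuration reads above differ accordingly: an extra '8192 bytes' value alongside the 4096-byte one, consistent with two chained 4096-byte operations per submission. That reading is inferred from the traced values rather than stated in the log. The invocation difference, under the same standalone caveats as before:

# Chain two crc32c operations per task (-C 2); the 8192-byte buffer in the
# trace is consistent with 2 x 4096 bytes.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2
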
00:07:08.412 05:54:00 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:08.412 ************************************ 00:07:08.412 END TEST accel_copy_crc32c_C2 00:07:08.412 ************************************ 00:07:08.412 05:54:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.412 05:54:00 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:08.412 05:54:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:08.412 05:54:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.412 05:54:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.412 ************************************ 00:07:08.412 START TEST accel_dualcast 00:07:08.412 ************************************ 00:07:08.412 05:54:00 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:08.412 05:54:00 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:08.412 05:54:00 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:08.412 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.412 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.412 05:54:00 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:08.412 05:54:00 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:08.412 05:54:00 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:08.412 05:54:00 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.412 05:54:00 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.412 05:54:00 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.412 05:54:00 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.412 05:54:00 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.412 05:54:00 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:08.412 05:54:00 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:08.683 [2024-07-13 05:54:00.157443] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
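
The dualcast job launched here writes a single source buffer out to two destinations, after which -y verifies the result; that description of the data movement comes from the workload's name and the verify step, not from anything explicit in this log. A coreutils-only illustration of the same movement (not SPDK code):

# Dualcast-style movement: one 4096-byte source duplicated into two
# destinations, then both copies verified, mirroring the -y check.
src=$(mktemp) dst1=$(mktemp) dst2=$(mktemp)
head -c 4096 /dev/urandom > "$src"
cp -- "$src" "$dst1" && cp -- "$src" "$dst2"
cmp -- "$src" "$dst1" && cmp -- "$src" "$dst2" && echo verified
rm -f -- "$src" "$dst1" "$dst2"
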
00:07:08.683 [2024-07-13 05:54:00.157671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76523 ] 00:07:08.683 [2024-07-13 05:54:00.302073] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.683 [2024-07-13 05:54:00.334674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.683 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.683 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.683 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.683 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.683 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.683 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.683 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.683 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.683 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:08.683 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.683 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.683 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.683 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.684 05:54:00 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.684 05:54:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:10.060 05:54:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.060 00:07:10.060 real 0m1.374s 00:07:10.060 user 0m1.177s 00:07:10.060 sys 0m0.103s 00:07:10.060 ************************************ 00:07:10.060 END TEST accel_dualcast 00:07:10.060 ************************************ 00:07:10.060 05:54:01 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.060 05:54:01 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:10.060 05:54:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.060 05:54:01 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:10.060 05:54:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:10.060 05:54:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.060 05:54:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.060 ************************************ 00:07:10.060 START TEST accel_compare 00:07:10.060 ************************************ 00:07:10.060 05:54:01 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:10.060 05:54:01 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:10.060 05:54:01 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:10.060 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.060 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.060 05:54:01 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:10.060 05:54:01 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:10.060 05:54:01 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:10.060 05:54:01 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.060 05:54:01 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.060 05:54:01 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.060 05:54:01 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.060 05:54:01 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.060 05:54:01 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:10.060 05:54:01 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:10.060 [2024-07-13 05:54:01.583613] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
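
The compare run starting here checks two 4096-byte buffers for equality instead of producing output data; with -y the outcome is verified like the other workloads. Again the operation's meaning is taken from its name, not spelled out in the log. A tiny shell analogue, purely illustrative:

# Compare-style check: cmp exits 0 when the two buffers hold identical bytes.
head -c 4096 /dev/zero > a.bin && cp a.bin b.bin
cmp a.bin b.bin && echo 'buffers equal'
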
00:07:10.060 [2024-07-13 05:54:01.583806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76558 ] 00:07:10.060 [2024-07-13 05:54:01.732980] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.060 [2024-07-13 05:54:01.777553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.320 05:54:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:11.259 05:54:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.259 00:07:11.259 real 0m1.409s 00:07:11.259 user 0m1.194s 00:07:11.259 sys 0m0.125s 00:07:11.259 ************************************ 00:07:11.259 END TEST accel_compare 00:07:11.259 ************************************ 00:07:11.259 05:54:02 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.259 05:54:02 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:11.518 05:54:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.518 05:54:02 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:11.518 05:54:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:11.518 05:54:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.518 05:54:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.518 ************************************ 00:07:11.518 START TEST accel_xor 00:07:11.518 ************************************ 00:07:11.518 05:54:03 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:11.518 05:54:03 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:11.518 05:54:03 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:11.518 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.518 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.518 05:54:03 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:11.518 05:54:03 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:11.518 05:54:03 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:11.518 05:54:03 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.518 05:54:03 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.518 05:54:03 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.518 05:54:03 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.518 05:54:03 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.518 05:54:03 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:11.518 05:54:03 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:11.518 [2024-07-13 05:54:03.050174] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
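
This first xor pass runs with the default of two source buffers (the val=2 read in the configuration below); the test right after it repeats the workload with -x 3 to exercise three sources. A sketch of the byte-level operation, shown on single bytes where accel_perf applies it across whole 4096-byte buffers:

# XOR across two and then three sources, one byte each for brevity.
a=$((0x5A)) b=$((0x3C)) c=$((0xFF))
printf 'two sources:   0x%02X\n' "$(( a ^ b ))"
printf 'three sources: 0x%02X\n' "$(( a ^ b ^ c ))"
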
00:07:11.518 [2024-07-13 05:54:03.050383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76588 ] 00:07:11.518 [2024-07-13 05:54:03.196346] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.518 [2024-07-13 05:54:03.233966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.775 05:54:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.709 05:54:04 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:12.709 05:54:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.709 00:07:12.709 real 0m1.405s 00:07:12.709 user 0m0.018s 00:07:12.709 sys 0m0.001s 00:07:12.709 05:54:04 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.709 05:54:04 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:12.709 ************************************ 00:07:12.709 END TEST accel_xor 00:07:12.709 ************************************ 00:07:12.967 05:54:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.967 05:54:04 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:12.967 05:54:04 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:12.967 05:54:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.967 05:54:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.967 ************************************ 00:07:12.967 START TEST accel_xor 00:07:12.967 ************************************ 00:07:12.967 05:54:04 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:12.967 05:54:04 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:12.967 05:54:04 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:12.967 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.967 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.967 05:54:04 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:12.967 05:54:04 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:12.967 05:54:04 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:12.967 05:54:04 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.967 05:54:04 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.967 05:54:04 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.967 05:54:04 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.967 05:54:04 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.967 05:54:04 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:12.967 05:54:04 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:12.967 [2024-07-13 05:54:04.508034] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
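
The second accel_xor test reuses the same test name but adds -x 3, which is why a val=3 source count shows up in the configuration reads that follow. Reproduced standalone under the same caveats as the earlier invocations:

# Three-source xor for one second with verification, matching the traced
# run_test accel_xor accel_test -t 1 -w xor -y -x 3 call.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3
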
00:07:12.967 [2024-07-13 05:54:04.508518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76624 ] 00:07:12.967 [2024-07-13 05:54:04.653295] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.967 [2024-07-13 05:54:04.685428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.226 05:54:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.181 05:54:05 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:14.181 05:54:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.181 00:07:14.181 real 0m1.385s 00:07:14.181 user 0m1.189s 00:07:14.181 sys 0m0.103s 00:07:14.181 ************************************ 00:07:14.181 END TEST accel_xor 00:07:14.181 ************************************ 00:07:14.181 05:54:05 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.181 05:54:05 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:14.181 05:54:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.181 05:54:05 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:14.181 05:54:05 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:14.181 05:54:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.181 05:54:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.181 ************************************ 00:07:14.181 START TEST accel_dif_verify 00:07:14.181 ************************************ 00:07:14.181 05:54:05 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:14.181 05:54:05 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:14.182 05:54:05 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:14.182 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.182 05:54:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.182 05:54:05 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:14.182 05:54:05 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:14.182 05:54:05 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:14.182 05:54:05 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.182 05:54:05 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.182 05:54:05 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.182 05:54:05 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.182 05:54:05 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.182 05:54:05 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:14.182 05:54:05 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:14.439 [2024-07-13 05:54:05.929605] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
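Most of the volume in this section is one bash idiom repeated: accel_perf, driven through /dev/fd/62, emits its configuration as colon-separated key/value lines, and the harness walks them with IFS=: and read -r var val inside a case statement, which is why nearly every record above ends in IFS=:, read -r var val, or case "$var" in. A condensed, hypothetical sketch of that loop follows; the case patterns are illustrative only, not the literal ones in test/accel/accel.sh.

#!/usr/bin/env bash
# Parse accel_perf's 'key: value' summary the way the trace suggests:
# split each line on ':' and remember which engine and opcode ran.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
accel_module='' accel_opc=''
while IFS=: read -r var val; do
  case "$var" in
    *Module*)   accel_module=${val//[[:space:]]/} ;;  # expected: software
    *Workload*) accel_opc=${val//[[:space:]]/} ;;     # expected: dif_verify
  esac
done < <("$SPDK_BIN" -t 1 -w dif_verify)
echo "module=$accel_module opc=$accel_opc"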
00:07:14.439 [2024-07-13 05:54:05.929815] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76659 ] 00:07:14.439 [2024-07-13 05:54:06.065839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.439 [2024-07-13 05:54:06.097985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.439 05:54:06 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.439 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.440 05:54:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.440 05:54:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.440 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.440 05:54:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.815 05:54:07 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:15.815 05:54:07 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.815 00:07:15.815 real 0m1.353s 00:07:15.815 user 0m1.169s 00:07:15.815 sys 0m0.097s 00:07:15.815 05:54:07 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.815 05:54:07 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:15.815 ************************************ 00:07:15.815 END TEST accel_dif_verify 00:07:15.815 ************************************ 00:07:15.815 05:54:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.815 05:54:07 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:15.815 05:54:07 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:15.815 05:54:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.815 05:54:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.815 ************************************ 00:07:15.815 START TEST accel_dif_generate 00:07:15.815 ************************************ 00:07:15.815 05:54:07 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:15.815 05:54:07 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:15.815 05:54:07 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:15.815 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.815 05:54:07 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.815 05:54:07 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:15.815 05:54:07 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:15.815 05:54:07 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:15.815 05:54:07 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.815 05:54:07 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.815 05:54:07 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.815 05:54:07 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.815 05:54:07 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.815 05:54:07 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:15.815 05:54:07 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:15.815 [2024-07-13 05:54:07.347835] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:15.815 [2024-07-13 05:54:07.348047] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76695 ] 00:07:15.815 [2024-07-13 05:54:07.495888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.815 [2024-07-13 05:54:07.527416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.074 05:54:07 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.074 05:54:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:17.010 05:54:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.010 00:07:17.010 real 0m1.368s 
00:07:17.010 user 0m1.175s 00:07:17.010 sys 0m0.107s 00:07:17.010 05:54:08 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.010 05:54:08 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:17.010 ************************************ 00:07:17.010 END TEST accel_dif_generate 00:07:17.010 ************************************ 00:07:17.010 05:54:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.010 05:54:08 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:17.010 05:54:08 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:17.010 05:54:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.010 05:54:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.010 ************************************ 00:07:17.010 START TEST accel_dif_generate_copy 00:07:17.010 ************************************ 00:07:17.010 05:54:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:17.010 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:17.010 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:17.010 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.010 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.010 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:17.010 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:17.010 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:17.010 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.010 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.010 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.010 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.010 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.010 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:17.010 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:17.268 [2024-07-13 05:54:08.770333] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
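The dif_generate_copy run starting here is the third DIF workload in a row, and each differs from the previous one only in its -w argument: the run_test lines record accel_test -t 1 -w dif_verify, dif_generate, and dif_generate_copy in turn. As a sketch, the trio can be replayed back to back with a simple loop, again assuming the built tree from this workspace:

#!/usr/bin/env bash
# Replay the three DIF workloads from this stretch of the log in order.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
for wl in dif_verify dif_generate dif_generate_copy; do
  "$SPDK_BIN" -t 1 -w "$wl"
done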
00:07:17.268 [2024-07-13 05:54:08.770534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76725 ] 00:07:17.268 [2024-07-13 05:54:08.915510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.268 [2024-07-13 05:54:08.946710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.268 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.268 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.269 05:54:08 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.269 05:54:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.645 00:07:18.645 real 0m1.381s 00:07:18.645 user 0m0.017s 00:07:18.645 sys 0m0.003s 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.645 ************************************ 00:07:18.645 END TEST accel_dif_generate_copy 00:07:18.645 ************************************ 00:07:18.645 05:54:10 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:18.645 05:54:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.645 05:54:10 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:18.645 05:54:10 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.645 05:54:10 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:18.645 05:54:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.645 05:54:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.645 ************************************ 00:07:18.645 START TEST accel_comp 00:07:18.645 ************************************ 00:07:18.645 05:54:10 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.645 05:54:10 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:18.645 05:54:10 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:18.645 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.645 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.645 05:54:10 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.645 05:54:10 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.645 05:54:10 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:18.645 05:54:10 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.645 05:54:10 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.645 05:54:10 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.645 05:54:10 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.645 05:54:10 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.645 05:54:10 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:18.645 05:54:10 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:18.645 [2024-07-13 05:54:10.201113] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:18.645 [2024-07-13 05:54:10.201398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76760 ] 00:07:18.645 [2024-07-13 05:54:10.346970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.904 [2024-07-13 05:54:10.379371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.904 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.904 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.904 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.904 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.904 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.904 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.904 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.904 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.904 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.905 05:54:10 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.905 05:54:10 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:19.840 05:54:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.840 00:07:19.840 real 0m1.377s 00:07:19.840 user 0m0.019s 00:07:19.840 sys 0m0.002s 00:07:19.840 05:54:11 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.840 ************************************ 00:07:19.840 END TEST accel_comp 00:07:19.840 ************************************ 00:07:19.840 05:54:11 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:20.097 05:54:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:20.097 05:54:11 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:20.097 05:54:11 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:20.097 05:54:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.097 05:54:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.097 ************************************ 00:07:20.097 START TEST accel_decomp 00:07:20.097 ************************************ 00:07:20.097 05:54:11 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:20.097 05:54:11 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:20.097 05:54:11 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:20.097 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.097 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.097 05:54:11 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:20.097 05:54:11 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:20.097 05:54:11 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:20.097 05:54:11 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.097 05:54:11 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.097 05:54:11 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.097 05:54:11 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.097 05:54:11 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.097 05:54:11 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:20.097 05:54:11 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:20.097 [2024-07-13 05:54:11.646807] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:20.098 [2024-07-13 05:54:11.647266] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76796 ] 00:07:20.098 [2024-07-13 05:54:11.799278] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.357 [2024-07-13 05:54:11.831901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.357 05:54:11 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.357 05:54:11 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
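The repeating "IFS=:", "read -r var val", and "case "$var" in" entries that dominate this section are bash xtrace (set -x) output from the result parser at test/accel/accel.sh lines 19-21: accel_perf reports its resolved configuration as colon-separated var:val pairs, and the harness reads them back one pair per loop iteration, which is why every reported value appears sandwiched between the same three trace lines. A minimal sketch of that loop, reconstructed from the trace rather than quoted from the accel.sh source:

    # hypothetical reconstruction -- the shape and line references are taken
    # from the xtrace above, not from the accel.sh source itself
    while IFS=: read -r var val; do        # accel.sh@19: split each "var:val" report on ':'
        case "$var" in                     # accel.sh@21: dispatch on the key
            *opc*)    accel_opc=$val ;;    # accel.sh@23: e.g. accel_opc=decompress
            *module*) accel_module=$val ;; # accel.sh@22: e.g. accel_module=software
        esac
    done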
00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.358 05:54:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.292 ************************************ 00:07:21.292 END TEST accel_decomp 00:07:21.292 ************************************ 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:21.292 05:54:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.292 00:07:21.292 real 0m1.394s 00:07:21.292 user 0m0.014s 00:07:21.292 sys 0m0.000s 00:07:21.292 05:54:12 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.292 05:54:12 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:21.551 05:54:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.551 05:54:13 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:21.551 05:54:13 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:21.551 05:54:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.551 05:54:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.551 ************************************ 00:07:21.551 START TEST accel_decomp_full 00:07:21.551 ************************************ 00:07:21.551 05:54:13 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:21.551 05:54:13 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:21.551 05:54:13 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:21.551 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.551 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.551 05:54:13 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:21.551 05:54:13 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:21.551 05:54:13 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:21.551 05:54:13 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.551 05:54:13 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.551 05:54:13 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.551 05:54:13 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.551 05:54:13 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.551 05:54:13 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:21.551 05:54:13 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:21.551 [2024-07-13 05:54:13.077250] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:21.551 [2024-07-13 05:54:13.077456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76830 ] 00:07:21.551 [2024-07-13 05:54:13.227553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.551 [2024-07-13 05:54:13.270252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.810 05:54:13 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.810 05:54:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.746 05:54:14 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:22.746 05:54:14 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.746 00:07:22.746 real 0m1.422s 00:07:22.746 user 0m1.206s 00:07:22.746 sys 0m0.123s 00:07:22.746 ************************************ 00:07:22.746 END TEST accel_decomp_full 00:07:22.746 ************************************ 00:07:22.746 05:54:14 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.746 05:54:14 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:23.005 05:54:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.005 05:54:14 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:23.005 05:54:14 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:23.005 05:54:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.005 05:54:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.005 ************************************ 00:07:23.005 START TEST accel_decomp_mcore 00:07:23.005 ************************************ 00:07:23.005 05:54:14 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:23.005 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:23.005 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:23.005 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.005 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.005 05:54:14 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:23.005 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:23.005 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:23.005 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.005 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.005 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.005 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.005 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.005 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:23.005 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:23.005 [2024-07-13 05:54:14.539946] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:23.005 [2024-07-13 05:54:14.540086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76867 ] 00:07:23.005 [2024-07-13 05:54:14.676657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.005 [2024-07-13 05:54:14.714723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.005 [2024-07-13 05:54:14.714783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.005 [2024-07-13 05:54:14.714856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.005 [2024-07-13 05:54:14.714923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.264 05:54:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.240 05:54:15 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.240 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.241 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.241 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.241 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:24.241 05:54:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.241 00:07:24.241 real 0m1.404s 00:07:24.241 user 0m4.475s 00:07:24.241 sys 0m0.123s 00:07:24.241 05:54:15 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.241 05:54:15 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:24.241 ************************************ 00:07:24.241 END TEST accel_decomp_mcore 00:07:24.241 ************************************ 00:07:24.241 05:54:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.241 05:54:15 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.241 05:54:15 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:24.241 05:54:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.241 05:54:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.241 ************************************ 00:07:24.241 START TEST accel_decomp_full_mcore 00:07:24.241 ************************************ 00:07:24.241 05:54:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.241 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:24.241 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:24.241 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.241 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.241 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.241 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.241 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:24.241 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.241 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.241 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.241 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.241 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.241 05:54:15 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:24.241 05:54:15 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:24.499 [2024-07-13 05:54:15.997741] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:24.499 [2024-07-13 05:54:15.997902] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76900 ] 00:07:24.499 [2024-07-13 05:54:16.137535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.499 [2024-07-13 05:54:16.178289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.499 [2024-07-13 05:54:16.178378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.499 [2024-07-13 05:54:16.178475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.499 [2024-07-13 05:54:16.178539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.499 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.499 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.499 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.499 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.500 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:24.758 05:54:16 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.758 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:24.759 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.759 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.759 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.759 05:54:16 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.759 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.759 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.759 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.759 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.759 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.759 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.759 05:54:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.695 05:54:17 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.695 ************************************ 00:07:25.695 END TEST accel_decomp_full_mcore 00:07:25.695 ************************************ 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.695 00:07:25.695 real 0m1.412s 00:07:25.695 user 0m4.506s 00:07:25.695 sys 0m0.130s 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.695 05:54:17 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:25.695 05:54:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.695 05:54:17 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:25.695 05:54:17 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:25.696 05:54:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.696 05:54:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.696 ************************************ 00:07:25.696 START TEST accel_decomp_mthread 00:07:25.696 ************************************ 00:07:25.696 05:54:17 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:25.696 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:25.696 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:25.696 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.696 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.696 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:25.696 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:25.696 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:25.696 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.696 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.696 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.696 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.696 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.696 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:25.696 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:25.955 [2024-07-13 05:54:17.449404] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:25.955 [2024-07-13 05:54:17.449563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76944 ] 00:07:25.955 [2024-07-13 05:54:17.586443] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.955 [2024-07-13 05:54:17.619966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.955 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
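The five decompress variants traced so far differ only in the flags run_test forwards to accel_perf; the effects noted below are inferences from the traced configuration values (4096-byte buffers vs. the full 111250-byte bib file, four reactors starting under mask 0xf, the traced thread count of 2), not statements from accel_perf's own documentation:

    # the invocations captured in the accel.sh@12 trace lines above, with one
    # illustrative shorthand variable ($bib) standing in for the literal path
    bib=/home/vagrant/spdk_repo/spdk/test/accel/bib
    accel_perf -c /dev/fd/62 -t 1 -w decompress -l "$bib" -y             # accel_decomp: 4096-byte buffers, 1 core
    accel_perf -c /dev/fd/62 -t 1 -w decompress -l "$bib" -y -o 0        # accel_decomp_full: whole file per operation
    accel_perf -c /dev/fd/62 -t 1 -w decompress -l "$bib" -y -m 0xf      # accel_decomp_mcore: core mask 0xf, 4 reactors
    accel_perf -c /dev/fd/62 -t 1 -w decompress -l "$bib" -y -o 0 -m 0xf # accel_decomp_full_mcore: both of the above
    accel_perf -c /dev/fd/62 -t 1 -w decompress -l "$bib" -y -T 2        # accel_decomp_mthread (running here): 2 threads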
00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.956 05:54:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.334 ************************************ 00:07:27.334 END TEST accel_decomp_mthread 00:07:27.334 ************************************ 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.334 00:07:27.334 real 0m1.365s 00:07:27.334 user 0m1.174s 00:07:27.334 sys 0m0.099s 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.334 05:54:18 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:27.334 05:54:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.334 05:54:18 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.334 05:54:18 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:27.334 05:54:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.334 05:54:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.334 ************************************ 00:07:27.334 START 
TEST accel_decomp_full_mthread 00:07:27.334 ************************************ 00:07:27.334 05:54:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.334 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:27.334 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:27.334 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.334 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.334 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.334 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.334 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:27.334 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.334 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.334 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.334 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.334 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.334 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:27.334 05:54:18 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:27.334 [2024-07-13 05:54:18.877427] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
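The accel_perf invocation traced just above is the whole test; a standalone sketch follows (the harness additionally feeds its accel JSON config over -c /dev/fd/62, and the flag readings marked "assumed" are mine, not from the log):

    # -t 1: run for 1 second (the '1 seconds' val below); -w decompress: opcode
    # under test; -l: compressed input file; -y: verify output (assumed); -o 0
    # and -T 2: full-buffer output and two worker threads (assumed from the
    # test name accel_decomp_full_mthread)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2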
00:07:27.334 [2024-07-13 05:54:18.877599] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76974 ] 00:07:27.334 [2024-07-13 05:54:19.023261] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.334 [2024-07-13 05:54:19.057370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
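The repeated IFS=:/read/case lines above and below all come from one small parser loop in accel.sh; a minimal reconstruction (only the loop shape is taken from the trace, the input format and variable wiring are assumptions):

    # Sketch of the accel.sh loop behind the repeated trace lines (reconstruction):
    while IFS=: read -r var val; do
        case "$var" in
            *module*) accel_module=${val# } ;;   # 'software' in this run
            *opc*) accel_opc=${val# } ;;         # 'decompress' in this run
        esac
    done <<< "$accel_perf_dump"                  # hypothetical: accel_perf settings text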
00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.593 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.594 05:54:19 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.594 05:54:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.532 00:07:28.532 real 0m1.421s 00:07:28.532 user 0m1.215s 00:07:28.532 sys 0m0.115s 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.532 05:54:20 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:28.532 ************************************ 00:07:28.532 END TEST accel_decomp_full_mthread 00:07:28.532 ************************************ 
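Every accel test in this log ends with the same three-line gate, visible as the [[ ... ]] checks above; spelled out with the variable names the trace itself uses:

    [[ -n $accel_module ]]            # a module was selected ('software' here)
    [[ -n $accel_opc ]]               # an opcode actually ran ('decompress')
    [[ $accel_module == software ]]   # and it resolved to the software path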
00:07:28.791 05:54:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.791 05:54:20 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:28.791 05:54:20 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:28.791 05:54:20 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:28.791 05:54:20 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.791 05:54:20 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.791 05:54:20 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.791 05:54:20 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.791 05:54:20 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.791 05:54:20 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:28.791 05:54:20 accel -- accel/accel.sh@41 -- # jq -r . 00:07:28.791 05:54:20 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:28.791 05:54:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.791 05:54:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.791 ************************************ 00:07:28.791 START TEST accel_dif_functional_tests 00:07:28.791 ************************************ 00:07:28.791 05:54:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:28.791 [2024-07-13 05:54:20.412729] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:28.791 [2024-07-13 05:54:20.412917] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77005 ] 00:07:29.051 [2024-07-13 05:54:20.554748] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.051 [2024-07-13 05:54:20.593596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.051 [2024-07-13 05:54:20.593683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.051 [2024-07-13 05:54:20.593749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.051 00:07:29.051 00:07:29.051 CUnit - A unit testing framework for C - Version 2.1-3 00:07:29.051 http://cunit.sourceforge.net/ 00:07:29.051 00:07:29.051 00:07:29.051 Suite: accel_dif 00:07:29.051 Test: verify: DIF generated, GUARD check ...passed 00:07:29.051 Test: verify: DIF generated, APPTAG check ...passed 00:07:29.051 Test: verify: DIF generated, REFTAG check ...passed 00:07:29.051 Test: verify: DIF not generated, GUARD check ...passed 00:07:29.051 Test: verify: DIF not generated, APPTAG check ...passed 00:07:29.051 Test: verify: DIF not generated, REFTAG check ...[2024-07-13 05:54:20.649331] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:29.051 [2024-07-13 05:54:20.649471] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:29.051 passed 00:07:29.051 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:29.051 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-13 05:54:20.649543] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:29.051 passed 00:07:29.051 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:29.051 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:29.051 Test: verify: REFTAG_INIT 
correct, REFTAG check ...[2024-07-13 05:54:20.649747] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:29.051 passed 00:07:29.051 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed[2024-07-13 05:54:20.650145] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:29.051 00:07:29.051 Test: verify copy: DIF generated, GUARD check ...passed 00:07:29.051 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:29.051 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:29.051 Test: verify copy: DIF not generated, GUARD check ...passed 00:07:29.051 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-13 05:54:20.650567] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:29.051 [2024-07-13 05:54:20.650746] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:29.051 passed 00:07:29.051 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-13 05:54:20.650958] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:29.051 passed 00:07:29.051 Test: generate copy: DIF generated, GUARD check ...passed 00:07:29.051 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:29.051 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:29.051 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:29.051 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:29.051 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:29.051 Test: generate copy: iovecs-len validate ...passed 00:07:29.051 Test: generate copy: buffer alignment validate ...passed 00:07:29.051 00:07:29.051 [2024-07-13 05:54:20.651527] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:29.051 Run Summary: Type Total Ran Passed Failed Inactive 00:07:29.051 suites 1 1 n/a 0 0 00:07:29.051 tests 26 26 26 0 0 00:07:29.051 asserts 115 115 115 0 n/a 00:07:29.051 00:07:29.051 Elapsed time = 0.007 seconds 00:07:29.311 00:07:29.311 real 0m0.534s 00:07:29.311 user 0m0.567s 00:07:29.311 sys 0m0.173s 00:07:29.311 05:54:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.311 05:54:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:29.311 ************************************ 00:07:29.311 END TEST accel_dif_functional_tests 00:07:29.311 ************************************ 00:07:29.311 05:54:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.311 00:07:29.311 real 0m32.099s 00:07:29.311 user 0m33.556s 00:07:29.311 sys 0m3.843s 00:07:29.311 05:54:20 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.311 ************************************ 00:07:29.311 END TEST accel 00:07:29.311 ************************************ 00:07:29.311 05:54:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.311 05:54:20 -- common/autotest_common.sh@1142 -- # return 0 00:07:29.311 05:54:20 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:29.311 05:54:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:29.311 05:54:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.311 05:54:20 -- common/autotest_common.sh@10 -- # set +x 00:07:29.311 ************************************ 00:07:29.311 START TEST accel_rpc 00:07:29.311 ************************************ 00:07:29.311 05:54:20 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:29.311 * Looking for test storage... 00:07:29.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:29.311 05:54:21 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:29.311 05:54:21 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=77076 00:07:29.311 05:54:21 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 77076 00:07:29.311 05:54:21 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 77076 ']' 00:07:29.311 05:54:21 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.311 05:54:21 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.311 05:54:21 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:29.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.311 05:54:21 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.311 05:54:21 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.311 05:54:21 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.570 [2024-07-13 05:54:21.139861] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
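The accel_rpc stage starting in the line above boils down to bringing up the target in wait-for-RPC mode; condensed from the trace (waitforlisten is the autotest_common.sh helper that blocks until /var/tmp/spdk.sock answers):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    spdk_tgt_pid=$!                  # 77076 in this run
    waitforlisten "$spdk_tgt_pid"    # from common/autotest_common.sh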
00:07:29.570 [2024-07-13 05:54:21.140074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77076 ] 00:07:29.570 [2024-07-13 05:54:21.290633] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.829 [2024-07-13 05:54:21.330015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.398 05:54:22 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:30.398 05:54:22 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:30.398 05:54:22 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:30.398 05:54:22 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:30.398 05:54:22 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:30.398 05:54:22 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:30.398 05:54:22 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:30.398 05:54:22 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.398 05:54:22 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.398 05:54:22 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.398 ************************************ 00:07:30.398 START TEST accel_assign_opcode 00:07:30.398 ************************************ 00:07:30.398 05:54:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:30.398 05:54:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:30.398 05:54:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.398 05:54:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:30.398 [2024-07-13 05:54:22.079002] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:30.398 05:54:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.398 05:54:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:30.398 05:54:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.398 05:54:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:30.398 [2024-07-13 05:54:22.091050] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:30.398 05:54:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.398 05:54:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:30.398 05:54:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.398 05:54:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:30.658 05:54:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.658 05:54:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:30.658 05:54:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.658 05:54:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:30.658 05:54:22 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:07:30.658 05:54:22 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:30.658 05:54:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.658 software 00:07:30.658 ************************************ 00:07:30.658 END TEST accel_assign_opcode 00:07:30.658 ************************************ 00:07:30.658 00:07:30.658 real 0m0.213s 00:07:30.658 user 0m0.056s 00:07:30.658 sys 0m0.014s 00:07:30.658 05:54:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.658 05:54:22 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:30.658 05:54:22 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:30.658 05:54:22 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 77076 00:07:30.658 05:54:22 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 77076 ']' 00:07:30.658 05:54:22 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 77076 00:07:30.658 05:54:22 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:30.658 05:54:22 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:30.658 05:54:22 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77076 00:07:30.658 killing process with pid 77076 00:07:30.658 05:54:22 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:30.658 05:54:22 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:30.658 05:54:22 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77076' 00:07:30.658 05:54:22 accel_rpc -- common/autotest_common.sh@967 -- # kill 77076 00:07:30.658 05:54:22 accel_rpc -- common/autotest_common.sh@972 -- # wait 77076 00:07:31.226 ************************************ 00:07:31.226 END TEST accel_rpc 00:07:31.226 ************************************ 00:07:31.226 00:07:31.226 real 0m1.727s 00:07:31.226 user 0m1.860s 00:07:31.226 sys 0m0.418s 00:07:31.226 05:54:22 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.226 05:54:22 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.226 05:54:22 -- common/autotest_common.sh@1142 -- # return 0 00:07:31.226 05:54:22 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:31.226 05:54:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:31.226 05:54:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.226 05:54:22 -- common/autotest_common.sh@10 -- # set +x 00:07:31.226 ************************************ 00:07:31.226 START TEST app_cmdline 00:07:31.226 ************************************ 00:07:31.226 05:54:22 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:31.226 * Looking for test storage... 00:07:31.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
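The accel_assign_opcode pass above, restated as the bare RPC sequence (every call appears verbatim in the trace; rpc_cmd is just a wrapper around rpc.py):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o copy -m incorrect    # accepted pre-init, even for a bogus module
    $rpc accel_assign_opc -o copy -m software     # reassign; the later call wins
    $rpc framework_start_init                     # leave --wait-for-rpc mode
    $rpc accel_get_opc_assignments | jq -r .copy | grep software   # expect 'software'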
00:07:31.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:31.226 05:54:22 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:31.226 05:54:22 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=77170 00:07:31.226 05:54:22 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 77170 00:07:31.226 05:54:22 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:31.226 05:54:22 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 77170 ']' 00:07:31.226 05:54:22 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.226 05:54:22 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.226 05:54:22 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.226 05:54:22 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.226 05:54:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:31.226 [2024-07-13 05:54:22.915987] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:31.226 [2024-07-13 05:54:22.916202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77170 ] 00:07:31.484 [2024-07-13 05:54:23.064585] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.484 [2024-07-13 05:54:23.104069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.419 05:54:23 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.419 05:54:23 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:32.419 05:54:23 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:32.419 { 00:07:32.419 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:07:32.419 "fields": { 00:07:32.419 "major": 24, 00:07:32.419 "minor": 9, 00:07:32.419 "patch": 0, 00:07:32.419 "suffix": "-pre", 00:07:32.419 "commit": "719d03c6a" 00:07:32.419 } 00:07:32.419 } 00:07:32.419 05:54:24 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:32.419 05:54:24 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:32.419 05:54:24 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:32.419 05:54:24 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:32.419 05:54:24 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:32.419 05:54:24 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:32.419 05:54:24 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:32.419 05:54:24 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.419 05:54:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:32.419 05:54:24 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.677 05:54:24 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:32.677 05:54:24 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:32.677 05:54:24 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 
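cmdline.sh is an allow-list test: the target starts with exactly two callable RPCs and everything else must be rejected, which the request/response pair below demonstrates. As plain commands (all taken from the trace):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        --rpcs-allowed spdk_get_version,rpc_get_methods &
    waitforlisten $!                              # 77170 in this run
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version                         # allowed: the version JSON above
    $rpc rpc_get_methods | jq -r '.[]' | sort     # allowed: exactly those two methods
    $rpc env_dpdk_get_mem_stats                   # rejected: 'Method not found' (-32601)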
00:07:32.677 05:54:24 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:32.677 05:54:24 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:32.677 05:54:24 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.677 05:54:24 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:32.677 05:54:24 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.677 05:54:24 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:32.677 05:54:24 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.677 05:54:24 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:32.677 05:54:24 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.678 05:54:24 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:32.678 05:54:24 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:32.678 request: 00:07:32.678 { 00:07:32.678 "method": "env_dpdk_get_mem_stats", 00:07:32.678 "req_id": 1 00:07:32.678 } 00:07:32.678 Got JSON-RPC error response 00:07:32.678 response: 00:07:32.678 { 00:07:32.678 "code": -32601, 00:07:32.678 "message": "Method not found" 00:07:32.678 } 00:07:32.678 05:54:24 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:32.678 05:54:24 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:32.678 05:54:24 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:32.678 05:54:24 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:32.678 05:54:24 app_cmdline -- app/cmdline.sh@1 -- # killprocess 77170 00:07:32.678 05:54:24 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 77170 ']' 00:07:32.678 05:54:24 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 77170 00:07:32.678 05:54:24 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:32.678 05:54:24 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.678 05:54:24 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77170 00:07:32.678 killing process with pid 77170 00:07:32.678 05:54:24 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:32.678 05:54:24 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:32.678 05:54:24 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77170' 00:07:32.678 05:54:24 app_cmdline -- common/autotest_common.sh@967 -- # kill 77170 00:07:32.678 05:54:24 app_cmdline -- common/autotest_common.sh@972 -- # wait 77170 00:07:33.246 00:07:33.246 real 0m1.956s 00:07:33.246 user 0m2.475s 00:07:33.246 sys 0m0.439s 00:07:33.246 ************************************ 00:07:33.246 END TEST app_cmdline 00:07:33.246 ************************************ 00:07:33.246 05:54:24 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.246 05:54:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:33.246 05:54:24 -- common/autotest_common.sh@1142 -- # return 0 00:07:33.246 05:54:24 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:33.246 
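The version stage dispatched at the end of this line scrapes include/spdk/version.h; its get_header_version helper, reconstructed from the grep/cut/tr calls traced below (the composition is mine, the individual commands are from the trace):

    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    echo "${major}.${minor}${suffix}"   # 24.9-pre here; the suffix maps to 24.9rc0 in Python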
05:54:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:33.246 05:54:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.246 05:54:24 -- common/autotest_common.sh@10 -- # set +x 00:07:33.246 ************************************ 00:07:33.246 START TEST version 00:07:33.246 ************************************ 00:07:33.246 05:54:24 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:33.246 * Looking for test storage... 00:07:33.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:33.246 05:54:24 version -- app/version.sh@17 -- # get_header_version major 00:07:33.246 05:54:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:33.246 05:54:24 version -- app/version.sh@14 -- # cut -f2 00:07:33.246 05:54:24 version -- app/version.sh@14 -- # tr -d '"' 00:07:33.246 05:54:24 version -- app/version.sh@17 -- # major=24 00:07:33.246 05:54:24 version -- app/version.sh@18 -- # get_header_version minor 00:07:33.247 05:54:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:33.247 05:54:24 version -- app/version.sh@14 -- # cut -f2 00:07:33.247 05:54:24 version -- app/version.sh@14 -- # tr -d '"' 00:07:33.247 05:54:24 version -- app/version.sh@18 -- # minor=9 00:07:33.247 05:54:24 version -- app/version.sh@19 -- # get_header_version patch 00:07:33.247 05:54:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:33.247 05:54:24 version -- app/version.sh@14 -- # cut -f2 00:07:33.247 05:54:24 version -- app/version.sh@14 -- # tr -d '"' 00:07:33.247 05:54:24 version -- app/version.sh@19 -- # patch=0 00:07:33.247 05:54:24 version -- app/version.sh@20 -- # get_header_version suffix 00:07:33.247 05:54:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:33.247 05:54:24 version -- app/version.sh@14 -- # cut -f2 00:07:33.247 05:54:24 version -- app/version.sh@14 -- # tr -d '"' 00:07:33.247 05:54:24 version -- app/version.sh@20 -- # suffix=-pre 00:07:33.247 05:54:24 version -- app/version.sh@22 -- # version=24.9 00:07:33.247 05:54:24 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:33.247 05:54:24 version -- app/version.sh@28 -- # version=24.9rc0 00:07:33.247 05:54:24 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:33.247 05:54:24 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:33.247 05:54:24 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:33.247 05:54:24 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:33.247 00:07:33.247 real 0m0.147s 00:07:33.247 user 0m0.075s 00:07:33.247 sys 0m0.104s 00:07:33.247 ************************************ 00:07:33.247 END TEST version 00:07:33.247 ************************************ 00:07:33.247 05:54:24 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.247 05:54:24 version -- common/autotest_common.sh@10 -- # set +x 00:07:33.247 05:54:24 -- common/autotest_common.sh@1142 -- # return 0 00:07:33.247 05:54:24 -- 
spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:33.247 05:54:24 -- spdk/autotest.sh@198 -- # uname -s 00:07:33.247 05:54:24 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:33.247 05:54:24 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:33.247 05:54:24 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:33.247 05:54:24 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:07:33.247 05:54:24 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:33.247 05:54:24 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:33.247 05:54:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.247 05:54:24 -- common/autotest_common.sh@10 -- # set +x 00:07:33.247 ************************************ 00:07:33.247 START TEST blockdev_nvme 00:07:33.247 ************************************ 00:07:33.247 05:54:24 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:33.506 * Looking for test storage... 00:07:33.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:33.506 05:54:25 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=77315 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:33.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
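blockdev.sh nvme first brings up a bare spdk_tgt and injects the four QEMU NVMe controllers via gen_nvme.sh; the setup_nvme_conf step traced below, condensed (composition mine, commands from the trace):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &
    waitforlisten $!                                           # 77315 in this run
    json=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh)   # bdev_nvme_attach_controller entries
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j "$json"
    # result: Nvme0..Nvme3 attached at PCIe 0000:00:10.0 through 0000:00:13.0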
00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:33.506 05:54:25 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 77315 00:07:33.506 05:54:25 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 77315 ']' 00:07:33.506 05:54:25 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.506 05:54:25 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.506 05:54:25 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.506 05:54:25 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.506 05:54:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:33.506 [2024-07-13 05:54:25.129124] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:33.506 [2024-07-13 05:54:25.129293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77315 ] 00:07:33.764 [2024-07-13 05:54:25.265799] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.764 [2024-07-13 05:54:25.302780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.697 05:54:26 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.697 05:54:26 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:07:34.697 05:54:26 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:07:34.697 05:54:26 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:07:34.697 05:54:26 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:34.697 05:54:26 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:34.697 05:54:26 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:34.697 05:54:26 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:34.697 05:54:26 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.697 05:54:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:34.697 05:54:26 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.697 05:54:26 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:07:34.697 05:54:26 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.697 05:54:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:34.955 05:54:26 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.955 05:54:26 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:07:34.955 05:54:26 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:07:34.955 05:54:26 blockdev_nvme -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:07:34.955 05:54:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:34.955 05:54:26 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.955 05:54:26 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:07:34.955 05:54:26 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.956 05:54:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:34.956 05:54:26 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.956 05:54:26 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:34.956 05:54:26 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.956 05:54:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:34.956 05:54:26 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.956 05:54:26 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:07:34.956 05:54:26 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:07:34.956 05:54:26 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:07:34.956 05:54:26 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.956 05:54:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:34.956 05:54:26 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.956 05:54:26 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:07:34.956 05:54:26 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:07:34.956 05:54:26 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "1bf99849-938b-4cfe-8fd8-a64877ba94f7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1bf99849-938b-4cfe-8fd8-a64877ba94f7",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "c1fafc9d-3353-40b7-bdf9-54295e2119cf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c1fafc9d-3353-40b7-bdf9-54295e2119cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c0226757-18ab-4c65-bf77-15719d50b412"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c0226757-18ab-4c65-bf77-15719d50b412",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c08ebb60-8ef4-466d-8353-dd14c110f49c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c08ebb60-8ef4-466d-8353-dd14c110f49c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": 
true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "883625b1-990a-494f-817e-17ddd6bb6749"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "883625b1-990a-494f-817e-17ddd6bb6749",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "53e0ac41-3ff0-42ee-b9e0-86484b3b4114"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "53e0ac41-3ff0-42ee-b9e0-86484b3b4114",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' 
},' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:34.956 05:54:26 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:07:34.956 05:54:26 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:07:34.956 05:54:26 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:07:34.956 05:54:26 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 77315 00:07:34.956 05:54:26 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 77315 ']' 00:07:34.956 05:54:26 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 77315 00:07:34.956 05:54:26 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:07:34.956 05:54:26 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:34.956 05:54:26 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77315 00:07:34.956 killing process with pid 77315 00:07:34.956 05:54:26 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:34.956 05:54:26 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:34.956 05:54:26 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77315' 00:07:34.956 05:54:26 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 77315 00:07:34.956 05:54:26 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 77315 00:07:35.522 05:54:26 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:35.522 05:54:26 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:35.522 05:54:26 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:35.522 05:54:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.522 05:54:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:35.522 ************************************ 00:07:35.522 START TEST bdev_hello_world 00:07:35.522 ************************************ 00:07:35.522 05:54:26 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:35.522 [2024-07-13 05:54:27.074453] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:35.522 [2024-07-13 05:54:27.074638] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77388 ] 00:07:35.522 [2024-07-13 05:54:27.219391] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.811 [2024-07-13 05:54:27.255041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.071 [2024-07-13 05:54:27.635684] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:36.071 [2024-07-13 05:54:27.635731] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:36.071 [2024-07-13 05:54:27.635765] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:36.071 [2024-07-13 05:54:27.638038] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:36.071 [2024-07-13 05:54:27.638637] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:36.071 [2024-07-13 05:54:27.638694] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:36.071 [2024-07-13 05:54:27.638896] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:36.071 00:07:36.071 [2024-07-13 05:54:27.638946] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:36.329 00:07:36.329 real 0m0.840s 00:07:36.329 user 0m0.549s 00:07:36.329 sys 0m0.186s 00:07:36.329 ************************************ 00:07:36.329 END TEST bdev_hello_world 00:07:36.329 ************************************ 00:07:36.329 05:54:27 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.329 05:54:27 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:36.329 05:54:27 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:07:36.329 05:54:27 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:07:36.329 05:54:27 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:36.329 05:54:27 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.329 05:54:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.329 ************************************ 00:07:36.329 START TEST bdev_bounds 00:07:36.329 ************************************ 00:07:36.329 05:54:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:07:36.329 05:54:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=77419 00:07:36.329 Process bdevio pid: 77419 00:07:36.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:36.329 05:54:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:36.329 05:54:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:36.329 05:54:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 77419' 00:07:36.329 05:54:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 77419 00:07:36.329 05:54:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 77419 ']' 00:07:36.329 05:54:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.329 05:54:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.329 05:54:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.329 05:54:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.329 05:54:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:36.329 [2024-07-13 05:54:27.965852] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:36.329 [2024-07-13 05:54:27.966418] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77419 ] 00:07:36.586 [2024-07-13 05:54:28.113700] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.586 [2024-07-13 05:54:28.150192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.586 [2024-07-13 05:54:28.150279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.586 [2024-07-13 05:54:28.150349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.152 05:54:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:37.152 05:54:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:07:37.152 05:54:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:37.410 I/O targets: 00:07:37.410 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:37.410 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:37.410 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:37.410 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:37.410 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:37.410 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:37.410 00:07:37.410 00:07:37.411 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.411 http://cunit.sourceforge.net/ 00:07:37.411 00:07:37.411 00:07:37.411 Suite: bdevio tests on: Nvme3n1 00:07:37.411 Test: blockdev write read block ...passed 00:07:37.411 Test: blockdev write zeroes read block ...passed 00:07:37.411 Test: blockdev write zeroes read no split ...passed 00:07:37.411 Test: blockdev write zeroes read split ...passed 00:07:37.411 Test: blockdev write zeroes read split partial ...passed 00:07:37.411 Test: blockdev reset ...[2024-07-13 05:54:28.977749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:37.411 passed 00:07:37.411 
Test: blockdev write read 8 blocks ...[2024-07-13 05:54:28.980281] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:37.411 passed 00:07:37.411 Test: blockdev write read size > 128k ...passed 00:07:37.411 Test: blockdev write read invalid size ...passed 00:07:37.411 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:37.411 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:37.411 Test: blockdev write read max offset ...passed 00:07:37.411 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:37.411 Test: blockdev writev readv 8 blocks ...passed 00:07:37.411 Test: blockdev writev readv 30 x 1block ...passed 00:07:37.411 Test: blockdev writev readv block ...passed 00:07:37.411 Test: blockdev writev readv size > 128k ...passed 00:07:37.411 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:37.411 Test: blockdev comparev and writev ...[2024-07-13 05:54:28.986766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2abc0e000 len:0x1000 00:07:37.411 [2024-07-13 05:54:28.986878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:37.411 passed 00:07:37.411 Test: blockdev nvme passthru rw ...passed 00:07:37.411 Test: blockdev nvme passthru vendor specific ...passed 00:07:37.411 Test: blockdev nvme admin passthru ...[2024-07-13 05:54:28.987828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:37.411 [2024-07-13 05:54:28.987902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:37.411 passed 00:07:37.411 Test: blockdev copy ...passed 00:07:37.411 Suite: bdevio tests on: Nvme2n3 00:07:37.411 Test: blockdev write read block ...passed 00:07:37.411 Test: blockdev write zeroes read block ...passed 00:07:37.411 Test: blockdev write zeroes read no split ...passed 00:07:37.411 Test: blockdev write zeroes read split ...passed 00:07:37.411 Test: blockdev write zeroes read split partial ...passed 00:07:37.411 Test: blockdev reset ...[2024-07-13 05:54:29.000218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:37.411 passed 00:07:37.411 Test: blockdev write read 8 blocks ...[2024-07-13 05:54:29.002891] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:37.411 passed 00:07:37.411 Test: blockdev write read size > 128k ...passed 00:07:37.411 Test: blockdev write read invalid size ...passed 00:07:37.411 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:37.411 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:37.411 Test: blockdev write read max offset ...passed 00:07:37.411 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:37.411 Test: blockdev writev readv 8 blocks ...passed 00:07:37.411 Test: blockdev writev readv 30 x 1block ...passed 00:07:37.411 Test: blockdev writev readv block ...passed 00:07:37.411 Test: blockdev writev readv size > 128k ...passed 00:07:37.411 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:37.411 Test: blockdev comparev and writev ...[2024-07-13 05:54:29.009187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b6e09000 len:0x1000 00:07:37.411 [2024-07-13 05:54:29.009280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:37.411 passed 00:07:37.411 Test: blockdev nvme passthru rw ...passed 00:07:37.411 Test: blockdev nvme passthru vendor specific ...passed 00:07:37.411 Test: blockdev nvme admin passthru ...[2024-07-13 05:54:29.010115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:37.411 [2024-07-13 05:54:29.010187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:37.411 passed 00:07:37.411 Test: blockdev copy ...passed 00:07:37.411 Suite: bdevio tests on: Nvme2n2 00:07:37.411 Test: blockdev write read block ...passed 00:07:37.411 Test: blockdev write zeroes read block ...passed 00:07:37.411 Test: blockdev write zeroes read no split ...passed 00:07:37.411 Test: blockdev write zeroes read split ...passed 00:07:37.411 Test: blockdev write zeroes read split partial ...passed 00:07:37.411 Test: blockdev reset ...[2024-07-13 05:54:29.022570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:37.411 passed 00:07:37.411 Test: blockdev write read 8 blocks ...[2024-07-13 05:54:29.025258] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:37.411 passed 00:07:37.411 Test: blockdev write read size > 128k ...passed 00:07:37.411 Test: blockdev write read invalid size ...passed 00:07:37.411 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:37.411 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:37.411 Test: blockdev write read max offset ...passed 00:07:37.411 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:37.411 Test: blockdev writev readv 8 blocks ...passed 00:07:37.411 Test: blockdev writev readv 30 x 1block ...passed 00:07:37.411 Test: blockdev writev readv block ...passed 00:07:37.411 Test: blockdev writev readv size > 128k ...passed 00:07:37.411 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:37.411 Test: blockdev comparev and writev ...[2024-07-13 05:54:29.031180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2abc06000 len:0x1000 00:07:37.411 [2024-07-13 05:54:29.031254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:37.411 passed 00:07:37.411 Test: blockdev nvme passthru rw ...passed 00:07:37.411 Test: blockdev nvme passthru vendor specific ...passed 00:07:37.411 Test: blockdev nvme admin passthru ...[2024-07-13 05:54:29.032035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:37.411 [2024-07-13 05:54:29.032081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:37.411 passed 00:07:37.411 Test: blockdev copy ...passed 00:07:37.411 Suite: bdevio tests on: Nvme2n1 00:07:37.411 Test: blockdev write read block ...passed 00:07:37.411 Test: blockdev write zeroes read block ...passed 00:07:37.411 Test: blockdev write zeroes read no split ...passed 00:07:37.411 Test: blockdev write zeroes read split ...passed 00:07:37.411 Test: blockdev write zeroes read split partial ...passed 00:07:37.411 Test: blockdev reset ...[2024-07-13 05:54:29.046623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:37.411 passed 00:07:37.411 Test: blockdev write read 8 blocks ...[2024-07-13 05:54:29.049263] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:37.411 passed 00:07:37.411 Test: blockdev write read size > 128k ...passed 00:07:37.411 Test: blockdev write read invalid size ...passed 00:07:37.411 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:37.411 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:37.411 Test: blockdev write read max offset ...passed 00:07:37.411 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:37.411 Test: blockdev writev readv 8 blocks ...passed 00:07:37.411 Test: blockdev writev readv 30 x 1block ...passed 00:07:37.411 Test: blockdev writev readv block ...passed 00:07:37.411 Test: blockdev writev readv size > 128k ...passed 00:07:37.411 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:37.411 Test: blockdev comparev and writev ...[2024-07-13 05:54:29.055575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2abc02000 len:0x1000 00:07:37.411 [2024-07-13 05:54:29.055648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:37.411 passed 00:07:37.411 Test: blockdev nvme passthru rw ...passed 00:07:37.411 Test: blockdev nvme passthru vendor specific ...passed 00:07:37.411 Test: blockdev nvme admin passthru ...[2024-07-13 05:54:29.056588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:37.411 [2024-07-13 05:54:29.056643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:37.411 passed 00:07:37.411 Test: blockdev copy ...passed 00:07:37.411 Suite: bdevio tests on: Nvme1n1 00:07:37.411 Test: blockdev write read block ...passed 00:07:37.411 Test: blockdev write zeroes read block ...passed 00:07:37.411 Test: blockdev write zeroes read no split ...passed 00:07:37.411 Test: blockdev write zeroes read split ...passed 00:07:37.411 Test: blockdev write zeroes read split partial ...passed 00:07:37.411 Test: blockdev reset ...[2024-07-13 05:54:29.071266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:37.411 passed 00:07:37.411 Test: blockdev write read 8 blocks ...[2024-07-13 05:54:29.073604] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:37.411 passed 00:07:37.411 Test: blockdev write read size > 128k ...passed 00:07:37.411 Test: blockdev write read invalid size ...passed 00:07:37.411 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:37.411 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:37.411 Test: blockdev write read max offset ...passed 00:07:37.411 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:37.411 Test: blockdev writev readv 8 blocks ...passed 00:07:37.411 Test: blockdev writev readv 30 x 1block ...passed 00:07:37.411 Test: blockdev writev readv block ...passed 00:07:37.411 Test: blockdev writev readv size > 128k ...passed 00:07:37.411 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:37.411 Test: blockdev comparev and writev ...[2024-07-13 05:54:29.079889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ab802000 len:0x1000 00:07:37.411 [2024-07-13 05:54:29.079964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:37.411 passed 00:07:37.411 Test: blockdev nvme passthru rw ...passed 00:07:37.411 Test: blockdev nvme passthru vendor specific ...passed 00:07:37.411 Test: blockdev nvme admin passthru ...[2024-07-13 05:54:29.080794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:37.411 [2024-07-13 05:54:29.080857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:37.411 passed 00:07:37.411 Test: blockdev copy ...passed 00:07:37.411 Suite: bdevio tests on: Nvme0n1 00:07:37.411 Test: blockdev write read block ...passed 00:07:37.411 Test: blockdev write zeroes read block ...passed 00:07:37.411 Test: blockdev write zeroes read no split ...passed 00:07:37.412 Test: blockdev write zeroes read split ...passed 00:07:37.412 Test: blockdev write zeroes read split partial ...passed 00:07:37.412 Test: blockdev reset ...[2024-07-13 05:54:29.095919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:37.412 passed 00:07:37.412 Test: blockdev write read 8 blocks ...[2024-07-13 05:54:29.098293] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:37.412 passed 00:07:37.412 Test: blockdev write read size > 128k ...passed 00:07:37.412 Test: blockdev write read invalid size ...passed 00:07:37.412 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:37.412 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:37.412 Test: blockdev write read max offset ...passed 00:07:37.412 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:37.412 Test: blockdev writev readv 8 blocks ...passed 00:07:37.412 Test: blockdev writev readv 30 x 1block ...passed 00:07:37.412 Test: blockdev writev readv block ...passed 00:07:37.412 Test: blockdev writev readv size > 128k ...passed 00:07:37.412 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:37.412 Test: blockdev comparev and writev ...passed 00:07:37.412 Test: blockdev nvme passthru rw ...[2024-07-13 05:54:29.103500] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:37.412 separate metadata which is not supported yet. 
00:07:37.412 passed 00:07:37.412 Test: blockdev nvme passthru vendor specific ...[2024-07-13 05:54:29.104022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:37.412 [2024-07-13 05:54:29.104072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:37.412 passed 00:07:37.412 Test: blockdev nvme admin passthru ...passed 00:07:37.412 Test: blockdev copy ...passed 00:07:37.412 00:07:37.412 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.412 suites 6 6 n/a 0 0 00:07:37.412 tests 138 138 138 0 0 00:07:37.412 asserts 893 893 893 0 n/a 00:07:37.412 00:07:37.412 Elapsed time = 0.331 seconds 00:07:37.412 0 00:07:37.412 05:54:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 77419 00:07:37.412 05:54:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 77419 ']' 00:07:37.412 05:54:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 77419 00:07:37.412 05:54:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:07:37.412 05:54:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.671 05:54:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77419 00:07:37.671 05:54:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:37.671 05:54:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:37.671 05:54:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77419' 00:07:37.671 killing process with pid 77419 00:07:37.671 05:54:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 77419 00:07:37.671 05:54:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 77419 00:07:37.671 05:54:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:07:37.671 00:07:37.671 real 0m1.456s 00:07:37.671 user 0m3.689s 00:07:37.671 sys 0m0.306s 00:07:37.671 05:54:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.671 ************************************ 00:07:37.671 END TEST bdev_bounds 00:07:37.671 ************************************ 00:07:37.671 05:54:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:37.671 05:54:29 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:07:37.671 05:54:29 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:37.671 05:54:29 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:37.671 05:54:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.671 05:54:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.671 ************************************ 00:07:37.671 START TEST bdev_nbd 00:07:37.671 ************************************ 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:07:37.671 
05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=77462 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 77462 /var/tmp/spdk-nbd.sock 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:37.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 77462 ']' 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:37.671 05:54:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:37.930 05:54:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:37.930 05:54:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:37.930 [2024-07-13 05:54:29.488333] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:07:37.930 [2024-07-13 05:54:29.488551] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.930 [2024-07-13 05:54:29.638327] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.189 [2024-07-13 05:54:29.676979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.755 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.756 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:07:38.756 05:54:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:38.756 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.756 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:38.756 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:38.756 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:38.756 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.756 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:38.756 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:38.756 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:38.756 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:38.756 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:38.756 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:38.756 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.014 1+0 records in 
00:07:39.014 1+0 records out 00:07:39.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602944 s, 6.8 MB/s 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:39.014 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.273 1+0 records in 00:07:39.273 1+0 records out 00:07:39.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457164 s, 9.0 MB/s 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:39.273 05:54:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.532 1+0 records in 00:07:39.532 1+0 records out 00:07:39.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000690367 s, 5.9 MB/s 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:39.532 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.798 1+0 records in 00:07:39.798 1+0 records out 00:07:39.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000898717 s, 4.6 MB/s 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.798 05:54:31 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:39.798 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:40.056 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:40.056 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:40.056 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:40.056 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:40.056 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:40.056 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:40.056 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:40.056 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:40.056 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:40.056 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:40.056 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:40.056 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.056 1+0 records in 00:07:40.056 1+0 records out 00:07:40.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000875104 s, 4.7 MB/s 00:07:40.056 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.056 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:40.056 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.314 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:40.314 05:54:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:40.314 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:40.314 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:40.314 05:54:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.572 1+0 records in 00:07:40.572 1+0 records out 00:07:40.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00246493 s, 1.7 MB/s 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:40.572 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:40.831 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:40.831 { 00:07:40.831 "nbd_device": "/dev/nbd0", 00:07:40.831 "bdev_name": "Nvme0n1" 00:07:40.831 }, 00:07:40.831 { 00:07:40.831 "nbd_device": "/dev/nbd1", 00:07:40.831 "bdev_name": "Nvme1n1" 00:07:40.831 }, 00:07:40.831 { 00:07:40.831 "nbd_device": "/dev/nbd2", 00:07:40.831 "bdev_name": "Nvme2n1" 00:07:40.831 }, 00:07:40.831 { 00:07:40.831 "nbd_device": "/dev/nbd3", 00:07:40.831 "bdev_name": "Nvme2n2" 00:07:40.831 }, 00:07:40.831 { 00:07:40.831 "nbd_device": "/dev/nbd4", 00:07:40.831 "bdev_name": "Nvme2n3" 00:07:40.831 }, 00:07:40.831 { 00:07:40.831 "nbd_device": "/dev/nbd5", 00:07:40.831 "bdev_name": "Nvme3n1" 00:07:40.831 } 00:07:40.831 ]' 00:07:40.831 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:40.831 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:40.831 { 00:07:40.831 "nbd_device": "/dev/nbd0", 00:07:40.831 "bdev_name": "Nvme0n1" 00:07:40.831 }, 00:07:40.831 { 00:07:40.831 "nbd_device": "/dev/nbd1", 00:07:40.831 "bdev_name": "Nvme1n1" 00:07:40.831 }, 00:07:40.831 { 00:07:40.831 "nbd_device": "/dev/nbd2", 00:07:40.831 "bdev_name": "Nvme2n1" 00:07:40.831 }, 00:07:40.831 { 00:07:40.831 "nbd_device": "/dev/nbd3", 00:07:40.831 "bdev_name": "Nvme2n2" 00:07:40.831 }, 00:07:40.831 { 00:07:40.831 "nbd_device": "/dev/nbd4", 00:07:40.831 "bdev_name": "Nvme2n3" 00:07:40.831 }, 00:07:40.831 { 00:07:40.831 "nbd_device": "/dev/nbd5", 00:07:40.831 "bdev_name": "Nvme3n1" 00:07:40.831 } 00:07:40.831 ]' 00:07:40.831 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:40.831 05:54:32 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:40.831 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.831 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:40.831 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:40.831 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:40.831 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:40.831 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:41.090 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:41.090 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:41.090 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:41.090 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.090 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.090 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:41.090 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:41.090 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.090 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.090 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:41.348 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:41.348 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:41.348 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:41.348 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.348 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.348 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:41.348 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:41.348 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.348 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.348 05:54:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:41.624 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:41.624 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:41.624 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:41.624 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.624 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.624 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:41.624 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:41.624 05:54:33 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:41.624 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.624 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:41.883 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:41.883 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:41.883 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:41.883 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.883 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.883 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:41.883 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:41.883 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.883 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:41.883 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:42.142 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:42.142 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:42.142 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:42.142 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.142 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.142 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:42.143 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.143 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.143 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.143 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:42.401 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:42.401 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:42.401 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:42.401 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.401 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.401 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:42.401 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.401 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.401 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:42.401 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.401 05:54:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:42.660 05:54:34 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:42.660 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:42.919 /dev/nbd0 00:07:42.919 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:42.919 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:42.919 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:42.919 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:42.919 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:42.919 
05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:42.919 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:42.919 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:42.919 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:42.919 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:42.919 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:42.919 1+0 records in 00:07:42.919 1+0 records out 00:07:42.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588587 s, 7.0 MB/s 00:07:42.919 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.919 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:42.919 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.919 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:42.919 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:42.919 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:42.919 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:42.919 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:43.178 /dev/nbd1 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.178 1+0 records in 00:07:43.178 1+0 records out 00:07:43.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000867165 s, 4.7 MB/s 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@887 -- # return 0 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:43.178 05:54:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:43.437 /dev/nbd10 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.437 1+0 records in 00:07:43.437 1+0 records out 00:07:43.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670922 s, 6.1 MB/s 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:43.437 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:43.697 /dev/nbd11 00:07:43.697 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:43.697 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:43.697 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:43.697 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:43.697 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:43.697 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:43.697 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:43.697 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:43.697 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:43.697 05:54:35 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:43.697 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.697 1+0 records in 00:07:43.697 1+0 records out 00:07:43.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610712 s, 6.7 MB/s 00:07:43.697 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.697 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:43.697 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.697 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:43.697 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:43.697 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.697 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:43.697 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:43.956 /dev/nbd12 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.956 1+0 records in 00:07:43.956 1+0 records out 00:07:43.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000625855 s, 6.5 MB/s 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:43.956 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:44.215 /dev/nbd13 
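
The trace above repeats one pattern per namespace: nbd_start_disk attaches a bdev to an /dev/nbd node over the RPC socket, then waitfornbd polls /proc/partitions for up to 20 tries and finishes with a single direct-I/O read to prove the node actually serves data. A minimal bash sketch of that pattern, using the rpc.py path and socket shown in the trace; the real helper lives in common/autotest_common.sh and differs in detail (the sleep pacing below is an assumption, the log does not show it):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    start_and_wait() {
        local bdev=$1 dev=$2 name i
        name=$(basename "$dev")                  # /dev/nbd0 -> nbd0
        "$rpc" -s "$sock" nbd_start_disk "$bdev" "$dev"
        for ((i = 1; i <= 20; i++)); do          # same 20-try budget as the trace
            grep -q -w "$name" /proc/partitions && break
            sleep 0.1                            # assumed pacing between retries
        done
        # one direct 4 KiB read, mirroring the dd check in the trace
        dd if="$dev" of=/dev/null bs=4096 count=1 iflag=direct
    }

    start_and_wait Nvme0n1 /dev/nbd0
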
00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.215 1+0 records in 00:07:44.215 1+0 records out 00:07:44.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000857676 s, 4.8 MB/s 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.215 05:54:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:44.475 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:44.475 { 00:07:44.475 "nbd_device": "/dev/nbd0", 00:07:44.475 "bdev_name": "Nvme0n1" 00:07:44.475 }, 00:07:44.475 { 00:07:44.475 "nbd_device": "/dev/nbd1", 00:07:44.475 "bdev_name": "Nvme1n1" 00:07:44.475 }, 00:07:44.475 { 00:07:44.475 "nbd_device": "/dev/nbd10", 00:07:44.475 "bdev_name": "Nvme2n1" 00:07:44.475 }, 00:07:44.475 { 00:07:44.475 "nbd_device": "/dev/nbd11", 00:07:44.475 "bdev_name": "Nvme2n2" 00:07:44.475 }, 00:07:44.475 { 00:07:44.475 "nbd_device": "/dev/nbd12", 00:07:44.475 "bdev_name": "Nvme2n3" 00:07:44.475 }, 00:07:44.475 { 00:07:44.475 "nbd_device": "/dev/nbd13", 00:07:44.475 "bdev_name": "Nvme3n1" 00:07:44.475 } 00:07:44.475 ]' 00:07:44.475 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:44.475 { 00:07:44.475 "nbd_device": "/dev/nbd0", 00:07:44.475 "bdev_name": "Nvme0n1" 00:07:44.475 }, 00:07:44.475 { 00:07:44.475 "nbd_device": "/dev/nbd1", 00:07:44.475 "bdev_name": "Nvme1n1" 00:07:44.475 }, 00:07:44.475 { 00:07:44.475 "nbd_device": "/dev/nbd10", 00:07:44.475 "bdev_name": "Nvme2n1" 
00:07:44.475 }, 00:07:44.475 { 00:07:44.475 "nbd_device": "/dev/nbd11", 00:07:44.475 "bdev_name": "Nvme2n2" 00:07:44.475 }, 00:07:44.475 { 00:07:44.475 "nbd_device": "/dev/nbd12", 00:07:44.475 "bdev_name": "Nvme2n3" 00:07:44.475 }, 00:07:44.475 { 00:07:44.475 "nbd_device": "/dev/nbd13", 00:07:44.475 "bdev_name": "Nvme3n1" 00:07:44.475 } 00:07:44.475 ]' 00:07:44.475 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:44.735 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:44.735 /dev/nbd1 00:07:44.735 /dev/nbd10 00:07:44.735 /dev/nbd11 00:07:44.735 /dev/nbd12 00:07:44.735 /dev/nbd13' 00:07:44.735 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:44.735 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:44.735 /dev/nbd1 00:07:44.735 /dev/nbd10 00:07:44.735 /dev/nbd11 00:07:44.735 /dev/nbd12 00:07:44.735 /dev/nbd13' 00:07:44.735 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:44.735 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:44.735 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:44.735 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:44.735 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:44.735 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:44.735 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:44.735 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:44.735 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:44.735 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:44.735 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:44.735 256+0 records in 00:07:44.735 256+0 records out 00:07:44.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00841211 s, 125 MB/s 00:07:44.735 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:44.735 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:44.735 256+0 records in 00:07:44.735 256+0 records out 00:07:44.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165027 s, 6.4 MB/s 00:07:44.735 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:44.735 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:44.994 256+0 records in 00:07:44.994 256+0 records out 00:07:44.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140022 s, 7.5 MB/s 00:07:44.994 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:44.994 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:45.253 256+0 records in 00:07:45.253 256+0 records out 00:07:45.253 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178842 s, 5.9 MB/s 00:07:45.253 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.253 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:45.253 256+0 records in 00:07:45.253 256+0 records out 00:07:45.253 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154845 s, 6.8 MB/s 00:07:45.253 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.253 05:54:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:45.512 256+0 records in 00:07:45.512 256+0 records out 00:07:45.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146254 s, 7.2 MB/s 00:07:45.512 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.512 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:45.512 256+0 records in 00:07:45.512 256+0 records out 00:07:45.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168639 s, 6.2 MB/s 00:07:45.512 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:45.512 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:45.512 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.512 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:45.512 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:45.512 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:45.512 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:45.512 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.512 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:45.772 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.772 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:45.772 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.772 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:45.772 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.772 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:45.772 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.772 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:45.772 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.772 05:54:37 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:45.772 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:45.772 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:45.772 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.772 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:45.772 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:45.772 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:45.772 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.772 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:46.032 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:46.032 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:46.032 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:46.032 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.032 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.032 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:46.032 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.032 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.032 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.032 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:46.291 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:46.291 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:46.291 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:46.291 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.291 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.291 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:46.291 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.291 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.291 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.291 05:54:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:46.550 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:46.550 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:46.550 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:46.550 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.550 05:54:38 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.550 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:46.550 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.550 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.550 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.550 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:46.809 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:46.809 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:46.809 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:46.809 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.809 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.809 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:46.809 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.809 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.809 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.809 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:47.068 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:47.068 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:47.068 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:47.068 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.068 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.068 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:47.068 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.068 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.068 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.068 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:47.327 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:47.327 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:47.327 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:47.327 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.327 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.327 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:47.327 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.327 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.327 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:47.327 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.327 05:54:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:47.616 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:47.616 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:47.616 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:47.616 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:47.616 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:47.616 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:47.616 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:47.616 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:47.616 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:47.616 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:47.616 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:47.616 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:47.616 05:54:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:47.616 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.616 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:47.616 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:47.616 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:47.616 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:47.912 malloc_lvol_verify 00:07:47.912 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:48.171 3945a09f-d103-4cce-b543-db209face558 00:07:48.171 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:48.430 f351e029-9fe1-4079-b11b-5b9709692214 00:07:48.430 05:54:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:48.689 /dev/nbd0 00:07:48.689 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:48.689 mke2fs 1.46.5 (30-Dec-2021) 00:07:48.689 Discarding device blocks: 0/4096 done 00:07:48.689 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:48.689 00:07:48.689 Allocating group tables: 0/1 done 00:07:48.689 Writing inode tables: 0/1 done 00:07:48.689 Creating journal (1024 blocks): done 00:07:48.689 Writing superblocks and filesystem accounting information: 0/1 done 00:07:48.689 00:07:48.689 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:48.689 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:48.689 05:54:40 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.689 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:48.689 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:48.689 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:48.689 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.689 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 77462 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 77462 ']' 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 77462 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77462 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77462' 00:07:48.948 killing process with pid 77462 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 77462 00:07:48.948 05:54:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 77462 00:07:49.208 05:54:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:07:49.208 00:07:49.208 real 0m11.377s 00:07:49.208 user 0m16.529s 00:07:49.208 sys 0m3.868s 00:07:49.208 05:54:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.208 ************************************ 00:07:49.208 END TEST bdev_nbd 00:07:49.208 ************************************ 00:07:49.208 05:54:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:49.208 05:54:40 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:07:49.208 05:54:40 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:07:49.208 05:54:40 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:07:49.208 skipping fio tests on NVMe due to multi-ns failures. 
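
Before the devices were torn down above, nbd_dd_data_verify pushed the same 1 MiB of random data (256 blocks of 4 KiB, oflag=direct) through every NBD node and then compared each device byte-for-byte against the source file; nbd_with_lvol_verify repeated the exercise on a logical volume built with bdev_malloc_create, bdev_lvol_create_lvstore and bdev_lvol_create, then formatted it with mkfs.ext4. A sketch of the write-and-compare round trip, with the paths taken from the trace:

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # one shared 1 MiB pattern
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct # write phase
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                            # byte-for-byte readback
    done
    rm "$tmp"
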
00:07:49.208 05:54:40 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:49.208 05:54:40 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:49.208 05:54:40 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:49.208 05:54:40 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:07:49.208 05:54:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.208 05:54:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:49.208 ************************************ 00:07:49.208 START TEST bdev_verify 00:07:49.208 ************************************ 00:07:49.208 05:54:40 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:49.208 [2024-07-13 05:54:40.911863] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:49.208 [2024-07-13 05:54:40.912044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77853 ] 00:07:49.468 [2024-07-13 05:54:41.059376] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:49.468 [2024-07-13 05:54:41.098361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.468 [2024-07-13 05:54:41.098424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.036 Running I/O for 5 seconds... 
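
bdev_verify exercises the same six namespaces through bdevperf instead of the kernel NBD path. The invocation, reconstructed from the trace with the flag meanings as comments (the trailing '' is an empty extra argument the harness always passes, and -C is copied through as-is rather than glossed):

    # -q 128    : keep 128 I/Os outstanding per job
    # -o 4096   : 4 KiB I/O size
    # -w verify : write a pattern, read it back, compare
    # -t 5      : run for 5 seconds
    # -m 0x3    : two reactor cores, which is why each bdev reports two jobs below
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
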
00:07:55.305
00:07:55.305 Latency(us)
00:07:55.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:55.305 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:55.305 Verification LBA range: start 0x0 length 0xbd0bd
00:07:55.305 Nvme0n1 : 5.07 1514.53 5.92 0.00 0.00 84315.76 16562.73 94371.84
00:07:55.305 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:55.305 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:55.305 Nvme0n1 : 5.04 1522.91 5.95 0.00 0.00 83713.40 15728.64 80549.70
00:07:55.305 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:55.305 Verification LBA range: start 0x0 length 0xa0000
00:07:55.305 Nvme1n1 : 5.07 1513.94 5.91 0.00 0.00 84202.82 17039.36 91035.46
00:07:55.305 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:55.305 Verification LBA range: start 0xa0000 length 0xa0000
00:07:55.305 Nvme1n1 : 5.04 1522.35 5.95 0.00 0.00 83598.69 18588.39 73400.32
00:07:55.305 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:55.305 Verification LBA range: start 0x0 length 0x80000
00:07:55.305 Nvme2n1 : 5.07 1513.36 5.91 0.00 0.00 84080.20 16562.73 87222.46
00:07:55.305 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:55.305 Verification LBA range: start 0x80000 length 0x80000
00:07:55.305 Nvme2n1 : 5.07 1527.74 5.97 0.00 0.00 83100.82 7298.33 68634.07
00:07:55.305 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:55.305 Verification LBA range: start 0x0 length 0x80000
00:07:55.305 Nvme2n2 : 5.08 1512.78 5.91 0.00 0.00 83942.59 16920.20 83409.45
00:07:55.305 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:55.305 Verification LBA range: start 0x80000 length 0x80000
00:07:55.305 Nvme2n2 : 5.08 1537.05 6.00 0.00 0.00 82591.51 7626.01 66250.94
00:07:55.305 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:55.305 Verification LBA range: start 0x0 length 0x80000
00:07:55.305 Nvme2n3 : 5.08 1512.18 5.91 0.00 0.00 83796.82 16205.27 88652.33
00:07:55.305 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:55.305 Verification LBA range: start 0x80000 length 0x80000
00:07:55.305 Nvme2n3 : 5.08 1536.34 6.00 0.00 0.00 82455.27 8460.10 68634.07
00:07:55.305 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:55.305 Verification LBA range: start 0x0 length 0x20000
00:07:55.305 Nvme3n1 : 5.08 1511.62 5.90 0.00 0.00 83655.20 15847.80 92465.34
00:07:55.305 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:55.305 Verification LBA range: start 0x20000 length 0x20000
00:07:55.305 Nvme3n1 : 5.08 1535.74 6.00 0.00 0.00 82320.16 8877.15 71017.19
00:07:55.305 ===================================================================================================================
00:07:55.305 Total : 18260.53 71.33 0.00 0.00 83476.59 7298.33 94371.84
00:07:55.564
00:07:55.564 real 0m6.263s
00:07:55.564 user 0m11.694s
00:07:55.564 sys 0m0.215s
00:07:55.564 05:54:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:55.564 05:54:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:07:55.564 ************************************
00:07:55.564 END TEST bdev_verify
00:07:55.564 ************************************
00:07:55.564 05:54:47 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0
00:07:55.564 05:54:47 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:55.564 05:54:47 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']'
00:07:55.564 05:54:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:55.564 05:54:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:55.564 ************************************
00:07:55.564 START TEST bdev_verify_big_io
00:07:55.564 ************************************
00:07:55.564 05:54:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:55.823 [2024-07-13 05:54:47.199834] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:07:55.823 [2024-07-13 05:54:47.199972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77946 ]
00:07:55.823 [2024-07-13 05:54:47.343316] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:55.823 [2024-07-13 05:54:47.388010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:55.823 [2024-07-13 05:54:47.388046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:56.391 Running I/O for 5 seconds...
00:08:02.974
00:08:02.974 Latency(us)
00:08:02.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:02.974 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:02.974 Verification LBA range: start 0x0 length 0xbd0b
00:08:02.974 Nvme0n1 : 5.67 131.15 8.20 0.00 0.00 939537.92 23473.80 1380307.32
00:08:02.974 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:02.974 Verification LBA range: start 0xbd0b length 0xbd0b
00:08:02.974 Nvme0n1 : 5.59 140.33 8.77 0.00 0.00 864315.50 18826.71 884616.84
00:08:02.974 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:02.974 Verification LBA range: start 0x0 length 0xa000
00:08:02.974 Nvme1n1 : 5.74 138.79 8.67 0.00 0.00 865056.64 40989.79 999006.95
00:08:02.974 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:02.974 Verification LBA range: start 0xa000 length 0xa000
00:08:02.974 Nvme1n1 : 5.67 146.15 9.13 0.00 0.00 826174.96 46709.29 854112.81
00:08:02.974 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:02.974 Verification LBA range: start 0x0 length 0x8000
00:08:02.974 Nvme2n1 : 5.80 136.12 8.51 0.00 0.00 857029.96 63867.81 1456567.39
00:08:02.974 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:02.974 Verification LBA range: start 0x8000 length 0x8000
00:08:02.974 Nvme2n1 : 5.74 152.37 9.52 0.00 0.00 775685.74 14477.50 777852.74
00:08:02.974 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:02.974 Verification LBA range: start 0x0 length 0x8000
00:08:02.974 Nvme2n2 : 5.80 145.03 9.06 0.00 0.00 785631.20 53858.68 1082893.03
00:08:02.974 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:02.974 Verification LBA range: start 0x8000 length 0x8000
00:08:02.974 Nvme2n2 : 5.74 156.03 9.75 0.00 0.00 739547.69 57909.99 800730.76
00:08:02.974 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:02.974 Verification LBA range: start 0x0 length 0x8000
00:08:02.974 Nvme2n3 : 5.83 150.77 9.42 0.00 0.00 737171.62 27167.65 1113397.06
00:08:02.974 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:02.974 Verification LBA range: start 0x8000 length 0x8000
00:08:02.974 Nvme2n3 : 5.81 154.51 9.66 0.00 0.00 719152.23 42419.67 823608.79
00:08:02.974 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:02.974 Verification LBA range: start 0x0 length 0x2000
00:08:02.974 Nvme3n1 : 5.86 162.16 10.14 0.00 0.00 671647.40 2815.07 1563331.49
00:08:02.974 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:02.974 Verification LBA range: start 0x2000 length 0x2000
00:08:02.974 Nvme3n1 : 5.82 172.30 10.77 0.00 0.00 638868.93 2993.80 831234.79
00:08:02.974 ===================================================================================================================
00:08:02.974 Total : 1785.69 111.61 0.00 0.00 778010.97 2815.07 1563331.49
00:08:02.974
00:08:02.974 real 0m7.230s
00:08:02.974 user 0m13.658s
00:08:02.974 sys 0m0.213s
00:08:02.974 05:54:54 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:02.974 05:54:54 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:08:02.974 ************************************
00:08:02.974 END TEST bdev_verify_big_io
00:08:02.974 ************************************
00:08:02.974 05:54:54 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0
00:08:02.974 05:54:54 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:02.974 05:54:54 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:08:02.974 05:54:54 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:02.974 05:54:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:02.974 ************************************
00:08:02.974 START TEST bdev_write_zeroes
00:08:02.974 ************************************
00:08:02.974 05:54:54 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:02.974 [2024-07-13 05:54:54.479884] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:08:02.974 [2024-07-13 05:54:54.480058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78044 ]
00:08:02.974 [2024-07-13 05:54:54.622874] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:02.974 [2024-07-13 05:54:54.667232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:03.541 Running I/O for 1 seconds...
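
The MiB/s column in the two tables above is just IOPS times I/O size: 18260.53 x 4096 B / 2^20 gives the 71.33 MiB/s of the 4 KiB verify total, and 1785.69 x 65536 B / 2^20 gives the 111.61 MiB/s of the 64 KiB big-I/O total. A quick check:

    printf '4 KiB verify : %.2f MiB/s\n' "$(echo '18260.53 * 4096 / 1048576' | bc -l)"
    printf '64 KiB big-IO: %.2f MiB/s\n' "$(echo '1785.69 * 65536 / 1048576' | bc -l)"
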
00:08:04.476
00:08:04.476 Latency(us)
00:08:04.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:04.476 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:04.476 Nvme0n1 : 1.02 8558.30 33.43 0.00 0.00 14904.78 11141.12 25022.84
00:08:04.476 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:04.476 Nvme1n1 : 1.02 8544.70 33.38 0.00 0.00 14903.57 11379.43 25618.62
00:08:04.476 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:04.476 Nvme2n1 : 1.02 8531.59 33.33 0.00 0.00 14867.04 11439.01 23950.43
00:08:04.476 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:04.476 Nvme2n2 : 1.02 8568.79 33.47 0.00 0.00 14769.89 9175.04 20018.27
00:08:04.476 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:04.476 Nvme2n3 : 1.02 8555.77 33.42 0.00 0.00 14743.24 7596.22 19184.17
00:08:04.476 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:04.477 Nvme3n1 : 1.03 8542.98 33.37 0.00 0.00 14730.08 7417.48 19184.17
00:08:04.477 ===================================================================================================================
00:08:04.477 Total : 51302.13 200.40 0.00 0.00 14819.50 7417.48 25618.62
00:08:04.735
00:08:04.735 real 0m1.927s
00:08:04.735 user 0m1.627s
00:08:04.735 sys 0m0.184s
00:08:04.735 05:54:56 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:08:04.735 05:54:56 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:08:04.735 ************************************
00:08:04.735 END TEST bdev_write_zeroes
00:08:04.735 ************************************
00:08:04.735 05:54:56 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0
00:08:04.735 05:54:56 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:04.736 05:54:56 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:08:04.736 05:54:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:04.736 05:54:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:04.736 ************************************
00:08:04.736 START TEST bdev_json_nonenclosed
00:08:04.736 ************************************
00:08:04.736 05:54:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:04.995 [2024-07-13 05:54:56.489684] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:08:04.995 [2024-07-13 05:54:56.489912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78081 ] 00:08:04.995 [2024-07-13 05:54:56.639822] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.995 [2024-07-13 05:54:56.682445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.995 [2024-07-13 05:54:56.682593] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:04.995 [2024-07-13 05:54:56.682633] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:04.995 [2024-07-13 05:54:56.682652] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.254 00:08:05.254 real 0m0.416s 00:08:05.254 user 0m0.205s 00:08:05.254 sys 0m0.107s 00:08:05.254 05:54:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:08:05.254 05:54:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.254 05:54:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:05.254 ************************************ 00:08:05.254 END TEST bdev_json_nonenclosed 00:08:05.254 ************************************ 00:08:05.254 05:54:56 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:08:05.254 05:54:56 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:08:05.254 05:54:56 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:05.254 05:54:56 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:05.254 05:54:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.254 05:54:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.254 ************************************ 00:08:05.254 START TEST bdev_json_nonarray 00:08:05.254 ************************************ 00:08:05.254 05:54:56 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:05.254 [2024-07-13 05:54:56.958961] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:05.254 [2024-07-13 05:54:56.959192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78106 ] 00:08:05.513 [2024-07-13 05:54:57.111681] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.513 [2024-07-13 05:54:57.156276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.513 [2024-07-13 05:54:57.156437] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
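
bdev_json_nonenclosed and the bdev_json_nonarray run that follows are negative tests: each feeds bdevperf a deliberately malformed configuration, json_config rejects it at startup (first "not enclosed in {}", then "'subsystems' should be an array"), and the harness counts the resulting exit status 234 as the expected outcome. The log never prints the two files, so the shapes below are only an illustration of what would trigger each message, not the actual contents:

    # top level is not a JSON object -> "not enclosed in {}":
    "subsystems": []

    # "subsystems" is an object, not an array -> "'subsystems' should be an array":
    { "subsystems": { "bdev": {} } }
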
00:08:05.513 [2024-07-13 05:54:57.156475] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:05.513 [2024-07-13 05:54:57.156493] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.773 00:08:05.773 real 0m0.441s 00:08:05.773 user 0m0.227s 00:08:05.773 sys 0m0.110s 00:08:05.773 05:54:57 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:08:05.773 05:54:57 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.773 ************************************ 00:08:05.773 END TEST bdev_json_nonarray 00:08:05.773 ************************************ 00:08:05.773 05:54:57 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:05.773 05:54:57 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:08:05.773 05:54:57 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:08:05.773 05:54:57 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:08:05.773 05:54:57 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:08:05.773 05:54:57 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:08:05.773 05:54:57 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:08:05.773 05:54:57 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:08:05.773 05:54:57 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:05.773 05:54:57 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:05.773 05:54:57 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:05.773 05:54:57 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:05.773 05:54:57 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:05.773 05:54:57 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:05.773 00:08:05.773 real 0m32.422s 00:08:05.773 user 0m50.600s 00:08:05.773 sys 0m5.943s 00:08:05.773 05:54:57 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.773 05:54:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.773 ************************************ 00:08:05.773 END TEST blockdev_nvme 00:08:05.773 ************************************ 00:08:05.773 05:54:57 -- common/autotest_common.sh@1142 -- # return 0 00:08:05.773 05:54:57 -- spdk/autotest.sh@213 -- # uname -s 00:08:05.773 05:54:57 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:08:05.773 05:54:57 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:05.773 05:54:57 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:05.773 05:54:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.773 05:54:57 -- common/autotest_common.sh@10 -- # set +x 00:08:05.773 ************************************ 00:08:05.773 START TEST blockdev_nvme_gpt 00:08:05.773 ************************************ 00:08:05.773 05:54:57 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:05.773 * Looking for test storage... 
00:08:06.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=78181 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 78181 00:08:06.033 05:54:57 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:06.033 05:54:57 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 78181 ']' 00:08:06.033 05:54:57 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.033 05:54:57 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.033 05:54:57 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
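
waitforlisten blocks until the freshly started spdk_tgt (pid 78181) answers RPC on /var/tmp/spdk.sock. A stripped-down sketch of that wait; the real helper in common/autotest_common.sh also enforces a retry budget and handles more failure modes:

    wait_for_rpc_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; do
            kill -0 "$pid" 2>/dev/null || return 1   # give up if the target died
            sleep 0.1
        done
    }

    wait_for_rpc_sock 78181
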
00:08:06.033 05:54:57 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.033 05:54:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:06.033 [2024-07-13 05:54:57.672787] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:06.033 [2024-07-13 05:54:57.673020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78181 ] 00:08:06.292 [2024-07-13 05:54:57.824864] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.292 [2024-07-13 05:54:57.870119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.859 05:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.859 05:54:58 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:08:06.859 05:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:08:06.859 05:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:08:06.859 05:54:58 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:07.119 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:07.378 Waiting for block devices as requested 00:08:07.378 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:07.636 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:07.636 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:07.636 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:12.903 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:12.903 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:08:12.903 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:08:12.903 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:08:12.903 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:12.903 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:08:12.903 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:08:12.903 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:08:12.903 BYT; 00:08:12.903 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:12.903 
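The xtrace above, together with the label check that follows, amounts to a small device-selection probe: skip zoned namespaces, then look for an NVMe disk that carries no partition label. A minimal standalone sketch of that logic, assuming sysfs and parted behave as in the trace (the function-free structure and variable names here are mine, not the harness's):

    # Find a blank, non-zoned NVMe disk that the GPT tests can safely claim.
    gpt_nvme=
    for sysdev in /sys/block/nvme*; do
        name=${sysdev##*/}
        # "none" marks a conventional (non-zoned) namespace; skip the rest
        if [[ -e $sysdev/queue/zoned && $(<"$sysdev/queue/zoned") != none ]]; then
            continue
        fi
        # -m: machine-readable, -s: script mode; a blank disk reports
        # "unrecognised disk label" instead of a partition table
        pt=$(parted "/dev/$name" -ms print 2>&1 || true)
        if [[ $pt == *"unrecognised disk label"* ]]; then
            gpt_nvme=/dev/$name
            break
        fi
    done
    echo "blank device for GPT tests: ${gpt_nvme:-<none>}"
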
05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:08:12.903 BYT; 00:08:12.903 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:08:12.903 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:08:12.903 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:12.903 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:12.903 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:12.903 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:08:12.903 05:55:04 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:08:12.903 05:55:04 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:12.903 05:55:04 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:08:12.903 05:55:04 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:08:12.903 05:55:04 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:12.903 05:55:04 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:12.903 05:55:04 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:12.903 05:55:04 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:12.903 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:12.903 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:08:12.903 05:55:04 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:08:12.903 05:55:04 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:12.903 05:55:04 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:12.903 05:55:04 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:08:12.903 05:55:04 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:08:12.903 05:55:04 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:12.904 05:55:04 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:12.904 05:55:04 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:12.904 05:55:04 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:12.904 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:12.904 05:55:04 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 
1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:08:13.839 The operation has completed successfully. 00:08:13.839 05:55:05 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:08:15.214 The operation has completed successfully. 00:08:15.214 05:55:06 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:15.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:16.038 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:16.038 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:16.038 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:16.038 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:16.297 05:55:07 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:08:16.297 05:55:07 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.297 05:55:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:16.297 [] 00:08:16.297 05:55:07 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.297 05:55:07 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:08:16.297 05:55:07 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:16.297 05:55:07 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:16.297 05:55:07 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:16.297 05:55:07 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:16.297 05:55:07 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.297 05:55:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:16.555 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.555 05:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:08:16.555 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.555 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:16.555 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.555 05:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:08:16.555 05:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:08:16.555 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.555 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:16.555 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.555 05:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:08:16.555 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.555 05:55:08 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:08:16.555 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.555 05:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:16.555 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.555 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:16.555 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.555 05:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:08:16.555 05:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:08:16.555 05:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:08:16.555 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.556 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:16.814 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.814 05:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:08:16.814 05:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:08:16.815 05:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' 
"partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "1850d504-177d-4904-bc24-487edf55edc4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1850d504-177d-4904-bc24-487edf55edc4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "6cf7e9ac-455a-4115-94ef-0392343835f5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6cf7e9ac-455a-4115-94ef-0392343835f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "78d6f9e8-e852-4e97-81f8-311110a9b192"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "78d6f9e8-e852-4e97-81f8-311110a9b192",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "4001dad6-9bef-475d-9678-976c3fac276c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4001dad6-9bef-475d-9678-976c3fac276c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "42c8b625-3bb4-4630-9673-a2e9d46497f1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "42c8b625-3bb4-4630-9673-a2e9d46497f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": 
false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:16.815 05:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:08:16.815 05:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:08:16.815 05:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:08:16.815 05:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 78181 00:08:16.815 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 78181 ']' 00:08:16.815 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 78181 00:08:16.815 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # uname 00:08:16.815 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:16.815 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78181 00:08:16.815 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:16.815 killing process with pid 78181 00:08:16.815 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:16.815 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78181' 00:08:16.815 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 78181 00:08:16.815 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 78181 00:08:17.083 05:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:17.083 05:55:08 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:17.083 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:08:17.083 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.083 05:55:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:17.083 ************************************ 00:08:17.083 START TEST bdev_hello_world 00:08:17.083 ************************************ 00:08:17.083 05:55:08 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:17.355 [2024-07-13 05:55:08.840297] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
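The GPT setup traced earlier follows a pattern worth isolating: SPDK's gpt vbdev module only auto-exposes partitions whose partition type GUID matches the value compiled into module/bdev/gpt/gpt.h, so the harness scrapes that GUID out of the C header and stamps it onto freshly created partitions with sgdisk. A condensed sketch of that flow, assuming the header layout implied by the trace (this is not the harness verbatim):

    GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h

    # The header defines the GUID as a macro call, roughly:
    #   #define SPDK_GPT_PART_TYPE_GUID GPT_GUID(0x6527994e, 0x2c5a, ...)
    # so split the matching line on parentheses to grab the argument list.
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
    spdk_guid=${spdk_guid//, /-}   # "0x6527994e, 0x2c5a, ..." -> "0x6527994e-0x2c5a-..."
    spdk_guid=${spdk_guid//0x/}    # drop the 0x prefixes -> canonical GUID text

    dev=/dev/nvme1n1               # the blank device found by the earlier probe
    parted -s "$dev" mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%
    # Retype partition 1 to SPDK's GUID and pin a known unique GUID, so the
    # test can later assert on the aliases of the resulting gpt bdevs.
    sgdisk -t "1:$spdk_guid" -u "1:6f89f330-603b-4116-ac73-2ca8eae53030" "$dev"
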
00:08:17.355 [2024-07-13 05:55:08.840516] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78791 ] 00:08:17.355 [2024-07-13 05:55:08.987653] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.355 [2024-07-13 05:55:09.020482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.921 [2024-07-13 05:55:09.385060] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:17.921 [2024-07-13 05:55:09.385142] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:08:17.921 [2024-07-13 05:55:09.385178] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:17.921 [2024-07-13 05:55:09.387558] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:17.921 [2024-07-13 05:55:09.388184] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:17.922 [2024-07-13 05:55:09.388219] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:17.922 [2024-07-13 05:55:09.388464] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:17.922 00:08:17.922 [2024-07-13 05:55:09.388499] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:17.922 00:08:17.922 real 0m0.838s 00:08:17.922 user 0m0.566s 00:08:17.922 sys 0m0.167s 00:08:17.922 05:55:09 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.922 ************************************ 00:08:17.922 END TEST bdev_hello_world 00:08:17.922 05:55:09 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:17.922 ************************************ 00:08:17.922 05:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:08:17.922 05:55:09 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:08:17.922 05:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:17.922 05:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.922 05:55:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:17.922 ************************************ 00:08:17.922 START TEST bdev_bounds 00:08:17.922 ************************************ 00:08:17.922 05:55:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:08:17.922 05:55:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=78822 00:08:17.922 05:55:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:17.922 05:55:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:17.922 Process bdevio pid: 78822 00:08:17.922 05:55:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 78822' 00:08:17.922 05:55:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 78822 00:08:17.922 05:55:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 78822 ']' 00:08:17.922 05:55:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.922 05:55:09 blockdev_nvme_gpt.bdev_bounds -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.922 05:55:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.922 05:55:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.922 05:55:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:18.180 [2024-07-13 05:55:09.706156] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:18.180 [2024-07-13 05:55:09.706312] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78822 ] 00:08:18.180 [2024-07-13 05:55:09.841019] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:18.180 [2024-07-13 05:55:09.879258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.180 [2024-07-13 05:55:09.879421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.180 [2024-07-13 05:55:09.879468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.116 05:55:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:19.116 05:55:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:08:19.116 05:55:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:19.116 I/O targets: 00:08:19.116 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:08:19.116 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:08:19.116 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:19.116 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:19.116 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:19.116 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:19.116 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:19.116 00:08:19.116 00:08:19.116 CUnit - A unit testing framework for C - Version 2.1-3 00:08:19.116 http://cunit.sourceforge.net/ 00:08:19.116 00:08:19.116 00:08:19.116 Suite: bdevio tests on: Nvme3n1 00:08:19.116 Test: blockdev write read block ...passed 00:08:19.116 Test: blockdev write zeroes read block ...passed 00:08:19.116 Test: blockdev write zeroes read no split ...passed 00:08:19.116 Test: blockdev write zeroes read split ...passed 00:08:19.116 Test: blockdev write zeroes read split partial ...passed 00:08:19.116 Test: blockdev reset ...[2024-07-13 05:55:10.813802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:19.116 passed 00:08:19.117 Test: blockdev write read 8 blocks ...[2024-07-13 05:55:10.816259] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:19.117 passed 00:08:19.117 Test: blockdev write read size > 128k ...passed 00:08:19.117 Test: blockdev write read invalid size ...passed 00:08:19.117 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:19.117 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:19.117 Test: blockdev write read max offset ...passed 00:08:19.117 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:19.117 Test: blockdev writev readv 8 blocks ...passed 00:08:19.117 Test: blockdev writev readv 30 x 1block ...passed 00:08:19.117 Test: blockdev writev readv block ...passed 00:08:19.117 Test: blockdev writev readv size > 128k ...passed 00:08:19.117 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:19.117 Test: blockdev comparev and writev ...[2024-07-13 05:55:10.822745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ad204000 len:0x1000 00:08:19.117 [2024-07-13 05:55:10.822817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:19.117 passed 00:08:19.117 Test: blockdev nvme passthru rw ...passed 00:08:19.117 Test: blockdev nvme passthru vendor specific ...passed 00:08:19.117 Test: blockdev nvme admin passthru ...[2024-07-13 05:55:10.823675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:19.117 [2024-07-13 05:55:10.823725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:19.117 passed 00:08:19.117 Test: blockdev copy ...passed 00:08:19.117 Suite: bdevio tests on: Nvme2n3 00:08:19.117 Test: blockdev write read block ...passed 00:08:19.117 Test: blockdev write zeroes read block ...passed 00:08:19.117 Test: blockdev write zeroes read no split ...passed 00:08:19.117 Test: blockdev write zeroes read split ...passed 00:08:19.117 Test: blockdev write zeroes read split partial ...passed 00:08:19.117 Test: blockdev reset ...[2024-07-13 05:55:10.836881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:19.117 passed 00:08:19.117 Test: blockdev write read 8 blocks ...[2024-07-13 05:55:10.839600] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:19.117 passed 00:08:19.117 Test: blockdev write read size > 128k ...passed 00:08:19.117 Test: blockdev write read invalid size ...passed 00:08:19.117 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:19.117 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:19.117 Test: blockdev write read max offset ...passed 00:08:19.117 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:19.117 Test: blockdev writev readv 8 blocks ...passed 00:08:19.376 Test: blockdev writev readv 30 x 1block ...passed 00:08:19.376 Test: blockdev writev readv block ...passed 00:08:19.376 Test: blockdev writev readv size > 128k ...passed 00:08:19.376 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:19.376 Test: blockdev comparev and writev ...[2024-07-13 05:55:10.845960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e723d000 len:0x1000 00:08:19.376 [2024-07-13 05:55:10.846033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:19.376 passed 00:08:19.376 Test: blockdev nvme passthru rw ...passed 00:08:19.376 Test: blockdev nvme passthru vendor specific ...passed 00:08:19.376 Test: blockdev nvme admin passthru ...[2024-07-13 05:55:10.846873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:19.376 [2024-07-13 05:55:10.846921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:19.376 passed 00:08:19.376 Test: blockdev copy ...passed 00:08:19.376 Suite: bdevio tests on: Nvme2n2 00:08:19.376 Test: blockdev write read block ...passed 00:08:19.376 Test: blockdev write zeroes read block ...passed 00:08:19.376 Test: blockdev write zeroes read no split ...passed 00:08:19.376 Test: blockdev write zeroes read split ...passed 00:08:19.376 Test: blockdev write zeroes read split partial ...passed 00:08:19.376 Test: blockdev reset ...[2024-07-13 05:55:10.858934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:19.376 passed 00:08:19.376 Test: blockdev write read 8 blocks ...[2024-07-13 05:55:10.861436] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:19.376 passed 00:08:19.376 Test: blockdev write read size > 128k ...passed 00:08:19.376 Test: blockdev write read invalid size ...passed 00:08:19.376 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:19.376 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:19.376 Test: blockdev write read max offset ...passed 00:08:19.376 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:19.376 Test: blockdev writev readv 8 blocks ...passed 00:08:19.376 Test: blockdev writev readv 30 x 1block ...passed 00:08:19.376 Test: blockdev writev readv block ...passed 00:08:19.376 Test: blockdev writev readv size > 128k ...passed 00:08:19.376 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:19.376 Test: blockdev comparev and writev ...[2024-07-13 05:55:10.868153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e7239000 len:0x1000 00:08:19.376 [2024-07-13 05:55:10.868265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:19.376 passed 00:08:19.376 Test: blockdev nvme passthru rw ...passed 00:08:19.376 Test: blockdev nvme passthru vendor specific ...passed 00:08:19.376 Test: blockdev nvme admin passthru ...[2024-07-13 05:55:10.869075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:19.376 [2024-07-13 05:55:10.869128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:19.376 passed 00:08:19.376 Test: blockdev copy ...passed 00:08:19.376 Suite: bdevio tests on: Nvme2n1 00:08:19.376 Test: blockdev write read block ...passed 00:08:19.376 Test: blockdev write zeroes read block ...passed 00:08:19.376 Test: blockdev write zeroes read no split ...passed 00:08:19.376 Test: blockdev write zeroes read split ...passed 00:08:19.376 Test: blockdev write zeroes read split partial ...passed 00:08:19.376 Test: blockdev reset ...[2024-07-13 05:55:10.882090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:19.376 passed 00:08:19.376 Test: blockdev write read 8 blocks ...[2024-07-13 05:55:10.884278] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:19.376 passed 00:08:19.376 Test: blockdev write read size > 128k ...passed 00:08:19.376 Test: blockdev write read invalid size ...passed 00:08:19.376 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:19.376 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:19.376 Test: blockdev write read max offset ...passed 00:08:19.376 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:19.376 Test: blockdev writev readv 8 blocks ...passed 00:08:19.376 Test: blockdev writev readv 30 x 1block ...passed 00:08:19.376 Test: blockdev writev readv block ...passed 00:08:19.376 Test: blockdev writev readv size > 128k ...passed 00:08:19.376 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:19.376 Test: blockdev comparev and writev ...[2024-07-13 05:55:10.890895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e7235000 len:0x1000 00:08:19.376 [2024-07-13 05:55:10.890963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:19.376 passed 00:08:19.376 Test: blockdev nvme passthru rw ...passed 00:08:19.376 Test: blockdev nvme passthru vendor specific ...[2024-07-13 05:55:10.891763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:19.376 passed 00:08:19.376 Test: blockdev nvme admin passthru ...[2024-07-13 05:55:10.891818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:19.376 passed 00:08:19.376 Test: blockdev copy ...passed 00:08:19.376 Suite: bdevio tests on: Nvme1n1 00:08:19.376 Test: blockdev write read block ...passed 00:08:19.376 Test: blockdev write zeroes read block ...passed 00:08:19.376 Test: blockdev write zeroes read no split ...passed 00:08:19.376 Test: blockdev write zeroes read split ...passed 00:08:19.376 Test: blockdev write zeroes read split partial ...passed 00:08:19.376 Test: blockdev reset ...[2024-07-13 05:55:10.903426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:19.376 passed 00:08:19.376 Test: blockdev write read 8 blocks ...[2024-07-13 05:55:10.905289] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:19.376 passed 00:08:19.376 Test: blockdev write read size > 128k ...passed 00:08:19.376 Test: blockdev write read invalid size ...passed 00:08:19.376 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:19.376 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:19.376 Test: blockdev write read max offset ...passed 00:08:19.376 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:19.376 Test: blockdev writev readv 8 blocks ...passed 00:08:19.376 Test: blockdev writev readv 30 x 1block ...passed 00:08:19.376 Test: blockdev writev readv block ...passed 00:08:19.376 Test: blockdev writev readv size > 128k ...passed 00:08:19.376 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:19.376 Test: blockdev comparev and writev ...[2024-07-13 05:55:10.911458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8e0e000 len:0x1000 00:08:19.376 [2024-07-13 05:55:10.911525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:19.376 passed 00:08:19.376 Test: blockdev nvme passthru rw ...passed 00:08:19.376 Test: blockdev nvme passthru vendor specific ...passed 00:08:19.376 Test: blockdev nvme admin passthru ...[2024-07-13 05:55:10.912473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:19.376 [2024-07-13 05:55:10.912534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:19.376 passed 00:08:19.376 Test: blockdev copy ...passed 00:08:19.376 Suite: bdevio tests on: Nvme0n1p2 00:08:19.376 Test: blockdev write read block ...passed 00:08:19.376 Test: blockdev write zeroes read block ...passed 00:08:19.376 Test: blockdev write zeroes read no split ...passed 00:08:19.376 Test: blockdev write zeroes read split ...passed 00:08:19.376 Test: blockdev write zeroes read split partial ...passed 00:08:19.376 Test: blockdev reset ...[2024-07-13 05:55:10.926806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:19.377 passed 00:08:19.377 Test: blockdev write read 8 blocks ...[2024-07-13 05:55:10.928792] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:19.377 passed 00:08:19.377 Test: blockdev write read size > 128k ...passed 00:08:19.377 Test: blockdev write read invalid size ...passed 00:08:19.377 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:19.377 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:19.377 Test: blockdev write read max offset ...passed 00:08:19.377 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:19.377 Test: blockdev writev readv 8 blocks ...passed 00:08:19.377 Test: blockdev writev readv 30 x 1block ...passed 00:08:19.377 Test: blockdev writev readv block ...passed 00:08:19.377 Test: blockdev writev readv size > 128k ...passed 00:08:19.377 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:19.377 Test: blockdev comparev and writev ...passed 00:08:19.377 Test: blockdev nvme passthru rw ...passed 00:08:19.377 Test: blockdev nvme passthru vendor specific ...passed 00:08:19.377 Test: blockdev nvme admin passthru ...passed 00:08:19.377 Test: blockdev copy ...[2024-07-13 05:55:10.934590] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:08:19.377 separate metadata which is not supported yet. 00:08:19.377 passed 00:08:19.377 Suite: bdevio tests on: Nvme0n1p1 00:08:19.377 Test: blockdev write read block ...passed 00:08:19.377 Test: blockdev write zeroes read block ...passed 00:08:19.377 Test: blockdev write zeroes read no split ...passed 00:08:19.377 Test: blockdev write zeroes read split ...passed 00:08:19.377 Test: blockdev write zeroes read split partial ...passed 00:08:19.377 Test: blockdev reset ...[2024-07-13 05:55:10.946672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:19.377 passed 00:08:19.377 Test: blockdev write read 8 blocks ...[2024-07-13 05:55:10.948706] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:19.377 passed 00:08:19.377 Test: blockdev write read size > 128k ...passed 00:08:19.377 Test: blockdev write read invalid size ...passed 00:08:19.377 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:19.377 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:19.377 Test: blockdev write read max offset ...passed 00:08:19.377 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:19.377 Test: blockdev writev readv 8 blocks ...passed 00:08:19.377 Test: blockdev writev readv 30 x 1block ...passed 00:08:19.377 Test: blockdev writev readv block ...passed 00:08:19.377 Test: blockdev writev readv size > 128k ...passed 00:08:19.377 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:19.377 Test: blockdev comparev and writev ...passed 00:08:19.377 Test: blockdev nvme passthru rw ...passed 00:08:19.377 Test: blockdev nvme passthru vendor specific ...passed 00:08:19.377 Test: blockdev nvme admin passthru ...passed 00:08:19.377 Test: blockdev copy ...[2024-07-13 05:55:10.954492] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:08:19.377 separate metadata which is not supported yet. 
00:08:19.377 passed 00:08:19.377 00:08:19.377 Run Summary: Type Total Ran Passed Failed Inactive 00:08:19.377 suites 7 7 n/a 0 0 00:08:19.377 tests 161 161 161 0 0 00:08:19.377 asserts 1006 1006 1006 0 n/a 00:08:19.377 00:08:19.377 Elapsed time = 0.357 seconds 00:08:19.377 0 00:08:19.377 05:55:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 78822 00:08:19.377 05:55:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 78822 ']' 00:08:19.377 05:55:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 78822 00:08:19.377 05:55:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:08:19.377 05:55:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:19.377 05:55:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78822 00:08:19.377 05:55:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:19.377 05:55:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:19.377 05:55:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78822' 00:08:19.377 killing process with pid 78822 00:08:19.377 05:55:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 78822 00:08:19.377 05:55:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 78822 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:08:19.636 00:08:19.636 real 0m1.551s 00:08:19.636 user 0m4.077s 00:08:19.636 sys 0m0.301s 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:19.636 ************************************ 00:08:19.636 END TEST bdev_bounds 00:08:19.636 ************************************ 00:08:19.636 05:55:11 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:08:19.636 05:55:11 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:19.636 05:55:11 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:19.636 05:55:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.636 05:55:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:19.636 ************************************ 00:08:19.636 START TEST bdev_nbd 00:08:19.636 ************************************ 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 
'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:08:19.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=78865 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 78865 /var/tmp/spdk-nbd.sock 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 78865 ']' 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:19.636 05:55:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:19.636 [2024-07-13 05:55:11.340564] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
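The nbd traces that follow repeat one pattern per bdev: ask the SPDK app over the RPC socket to attach a bdev to a /dev/nbdX node, then wait until the kernel device is actually usable. A hedged sketch of that wait step, condensed from the visible xtrace (the temp-file path and the sleep interval are my assumptions, not the harness's):

    # After: nbd_device=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1)
    # the kernel device can lag the RPC reply, so poll /proc/partitions and
    # then prove the device is readable with a single O_DIRECT read.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        ((i <= 20)) || return 1    # device never appeared
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]]           # a dead device yields an empty test file
    }

    waitfornbd nbd0
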
00:08:19.636 [2024-07-13 05:55:11.340755] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.894 [2024-07-13 05:55:11.489790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.894 [2024-07-13 05:55:11.525749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:20.830 1+0 records in 00:08:20.830 1+0 records out 00:08:20.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000778611 s, 5.3 MB/s 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:20.830 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:08:21.088 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:21.088 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:21.088 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:21.088 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:21.088 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:21.088 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:21.088 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:21.088 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:21.088 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:21.088 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:21.088 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:21.088 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:21.088 1+0 records in 00:08:21.088 1+0 records out 00:08:21.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583628 s, 7.0 MB/s 00:08:21.088 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.088 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:21.088 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.349 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:21.349 05:55:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:21.349 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:21.349 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:21.349 05:55:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1 00:08:21.349 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:21.349 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:21.349 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:21.349 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:08:21.349 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:21.349 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:21.349 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:21.349 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:08:21.349 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:21.349 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:21.349 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:21.349 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:21.349 1+0 records in 00:08:21.349 1+0 records out 00:08:21.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689678 s, 5.9 MB/s 00:08:21.349 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:21.610 1+0 records in 00:08:21.610 1+0 records out 00:08:21.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701591 s, 5.8 MB/s 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:21.610 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:22.176 1+0 records in 00:08:22.176 1+0 records out 00:08:22.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510473 s, 8.0 MB/s 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:22.176 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:22.435 1+0 records in 00:08:22.435 1+0 records out 00:08:22.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574195 s, 7.1 MB/s 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:22.435 05:55:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:22.694 1+0 records in 00:08:22.694 1+0 records out 00:08:22.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000848625 s, 4.8 MB/s 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:22.694 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:22.953 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:22.953 { 00:08:22.953 "nbd_device": "/dev/nbd0", 00:08:22.953 "bdev_name": "Nvme0n1p1" 00:08:22.953 }, 00:08:22.953 { 00:08:22.953 "nbd_device": "/dev/nbd1", 00:08:22.953 "bdev_name": "Nvme0n1p2" 00:08:22.953 }, 00:08:22.953 { 00:08:22.953 "nbd_device": "/dev/nbd2", 00:08:22.953 "bdev_name": "Nvme1n1" 00:08:22.953 }, 00:08:22.953 { 00:08:22.953 "nbd_device": "/dev/nbd3", 00:08:22.953 "bdev_name": "Nvme2n1" 00:08:22.953 }, 00:08:22.953 { 00:08:22.953 "nbd_device": "/dev/nbd4", 00:08:22.953 "bdev_name": "Nvme2n2" 00:08:22.953 }, 00:08:22.953 { 00:08:22.953 "nbd_device": "/dev/nbd5", 00:08:22.953 "bdev_name": "Nvme2n3" 00:08:22.953 }, 00:08:22.953 { 00:08:22.953 "nbd_device": "/dev/nbd6", 00:08:22.953 "bdev_name": "Nvme3n1" 00:08:22.953 } 00:08:22.953 ]' 00:08:22.953 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:22.953 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:22.953 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:22.953 { 00:08:22.953 "nbd_device": "/dev/nbd0", 00:08:22.953 "bdev_name": "Nvme0n1p1" 00:08:22.953 }, 00:08:22.953 { 00:08:22.953 "nbd_device": "/dev/nbd1", 00:08:22.953 "bdev_name": "Nvme0n1p2" 00:08:22.953 }, 00:08:22.953 { 00:08:22.953 "nbd_device": "/dev/nbd2", 00:08:22.953 "bdev_name": "Nvme1n1" 00:08:22.953 }, 00:08:22.953 { 00:08:22.953 "nbd_device": "/dev/nbd3", 00:08:22.953 "bdev_name": "Nvme2n1" 00:08:22.953 }, 00:08:22.953 { 00:08:22.953 "nbd_device": "/dev/nbd4", 00:08:22.953 "bdev_name": "Nvme2n2" 00:08:22.953 }, 00:08:22.953 { 00:08:22.953 "nbd_device": "/dev/nbd5", 00:08:22.953 "bdev_name": "Nvme2n3" 00:08:22.953 }, 00:08:22.953 { 00:08:22.953 "nbd_device": "/dev/nbd6", 00:08:22.953 "bdev_name": "Nvme3n1" 00:08:22.953 } 00:08:22.953 ]' 00:08:22.953 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:22.953 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.953 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:22.953 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:22.953 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:22.953 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:22.953 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:23.211 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:23.211 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:23.211 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:23.211 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:23.211 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:23.211 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:23.211 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:23.211 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:23.211 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.211 05:55:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:23.470 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:23.470 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:23.470 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:23.470 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:23.470 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:23.470 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:23.470 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:23.470 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:23.470 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.470 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:23.728 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:23.728 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:23.728 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:23.728 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:23.728 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:23.728 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:23.728 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:23.728 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:23.728 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.728 05:55:15 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:23.986 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:23.986 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:23.986 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:23.986 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:23.986 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:23.986 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:23.986 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:23.986 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:23.986 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.986 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:24.244 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:24.244 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:24.244 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:24.244 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.244 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.244 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:24.244 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:24.244 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.244 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:24.244 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:24.503 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:24.503 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:24.503 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:24.503 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.503 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.503 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:24.503 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:24.503 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.503 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:24.503 05:55:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:24.503 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:24.503 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:24.503 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:08:24.503 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.503 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.503 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:24.503 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:24.503 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.503 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:24.503 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.503 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:24.762 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:24.762 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:24.762 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:25.021 
05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:25.021 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:08:25.280 /dev/nbd0 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.280 1+0 records in 00:08:25.280 1+0 records out 00:08:25.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000909817 s, 4.5 MB/s 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:25.280 05:55:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:08:25.539 /dev/nbd1 00:08:25.539 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:25.539 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:25.539 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:25.539 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:25.539 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:25.539 05:55:17 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:25.539 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:25.539 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:25.539 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:25.539 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:25.539 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.539 1+0 records in 00:08:25.539 1+0 records out 00:08:25.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498611 s, 8.2 MB/s 00:08:25.539 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.539 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:25.539 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.539 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:25.539 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:25.539 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:25.539 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:25.539 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:08:25.797 /dev/nbd10 00:08:25.797 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:25.797 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:25.797 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:08:25.797 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:25.797 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:25.797 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:25.798 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:08:25.798 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:25.798 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:25.798 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:25.798 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.798 1+0 records in 00:08:25.798 1+0 records out 00:08:25.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519024 s, 7.9 MB/s 00:08:25.798 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.798 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:25.798 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.798 05:55:17 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:25.798 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:25.798 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:25.798 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:25.798 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:26.057 /dev/nbd11 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:26.057 1+0 records in 00:08:26.057 1+0 records out 00:08:26.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649073 s, 6.3 MB/s 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:26.057 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:26.315 /dev/nbd12 00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 
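The autotest_common.sh@866-@887 lines traced here are the readiness check run after each nbd_start_disk: poll /proc/partitions for the device name, then read one 4 KiB block with O_DIRECT and confirm the result is non-empty. A bash reconstruction of that helper, for illustration only (the sleep between retries and the `|| continue` on a failed dd are assumptions — the trace succeeds on the first try, so neither is visible in it):

# Sketch of the waitfornbd check visible in the trace; reconstructed,
# not copied from the repository.
waitfornbd() {
    local nbd_name=$1 i size
    local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    # Wait for the kernel to register the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed retry delay
    done
    # Then prove the device answers reads: one 4 KiB O_DIRECT block,
    # matching the dd/stat/rm lines in the trace.
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct || continue
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ] && return 0
    done
    return 1
}
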
00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:26.316 1+0 records in 00:08:26.316 1+0 records out 00:08:26.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649765 s, 6.3 MB/s 00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:26.316 05:55:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:26.574 /dev/nbd13 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:26.574 1+0 records in 00:08:26.574 1+0 records out 00:08:26.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000767094 s, 5.3 MB/s 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:26.574 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:26.833 /dev/nbd14 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:26.833 1+0 records in 00:08:26.833 1+0 records out 00:08:26.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000967587 s, 4.2 MB/s 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.833 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:27.092 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:27.092 { 00:08:27.092 "nbd_device": "/dev/nbd0", 00:08:27.092 "bdev_name": "Nvme0n1p1" 00:08:27.092 }, 00:08:27.092 { 00:08:27.092 "nbd_device": "/dev/nbd1", 00:08:27.092 "bdev_name": "Nvme0n1p2" 00:08:27.092 }, 00:08:27.092 { 00:08:27.092 "nbd_device": "/dev/nbd10", 00:08:27.092 "bdev_name": "Nvme1n1" 00:08:27.092 }, 00:08:27.092 { 00:08:27.092 "nbd_device": "/dev/nbd11", 00:08:27.092 "bdev_name": "Nvme2n1" 00:08:27.092 }, 00:08:27.092 { 00:08:27.092 "nbd_device": "/dev/nbd12", 00:08:27.092 "bdev_name": "Nvme2n2" 00:08:27.092 }, 00:08:27.092 { 00:08:27.092 "nbd_device": "/dev/nbd13", 00:08:27.092 "bdev_name": "Nvme2n3" 
00:08:27.092 }, 00:08:27.092 { 00:08:27.092 "nbd_device": "/dev/nbd14", 00:08:27.092 "bdev_name": "Nvme3n1" 00:08:27.092 } 00:08:27.092 ]' 00:08:27.092 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:27.092 { 00:08:27.092 "nbd_device": "/dev/nbd0", 00:08:27.092 "bdev_name": "Nvme0n1p1" 00:08:27.092 }, 00:08:27.092 { 00:08:27.092 "nbd_device": "/dev/nbd1", 00:08:27.092 "bdev_name": "Nvme0n1p2" 00:08:27.092 }, 00:08:27.092 { 00:08:27.092 "nbd_device": "/dev/nbd10", 00:08:27.092 "bdev_name": "Nvme1n1" 00:08:27.092 }, 00:08:27.092 { 00:08:27.092 "nbd_device": "/dev/nbd11", 00:08:27.092 "bdev_name": "Nvme2n1" 00:08:27.092 }, 00:08:27.092 { 00:08:27.092 "nbd_device": "/dev/nbd12", 00:08:27.092 "bdev_name": "Nvme2n2" 00:08:27.092 }, 00:08:27.092 { 00:08:27.092 "nbd_device": "/dev/nbd13", 00:08:27.092 "bdev_name": "Nvme2n3" 00:08:27.092 }, 00:08:27.092 { 00:08:27.092 "nbd_device": "/dev/nbd14", 00:08:27.092 "bdev_name": "Nvme3n1" 00:08:27.092 } 00:08:27.092 ]' 00:08:27.093 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:27.093 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:27.093 /dev/nbd1 00:08:27.093 /dev/nbd10 00:08:27.093 /dev/nbd11 00:08:27.093 /dev/nbd12 00:08:27.093 /dev/nbd13 00:08:27.093 /dev/nbd14' 00:08:27.093 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:27.093 /dev/nbd1 00:08:27.093 /dev/nbd10 00:08:27.093 /dev/nbd11 00:08:27.093 /dev/nbd12 00:08:27.093 /dev/nbd13 00:08:27.093 /dev/nbd14' 00:08:27.093 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:27.093 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:27.093 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:27.093 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:27.093 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:27.093 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:27.093 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:27.093 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:27.093 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:27.093 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:27.093 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:27.093 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:27.093 256+0 records in 00:08:27.093 256+0 records out 00:08:27.093 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00694273 s, 151 MB/s 00:08:27.093 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:27.093 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:27.352 256+0 records in 00:08:27.352 256+0 records out 00:08:27.352 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.174264 s, 6.0 MB/s 00:08:27.352 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:27.352 05:55:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:27.611 256+0 records in 00:08:27.611 256+0 records out 00:08:27.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16905 s, 6.2 MB/s 00:08:27.611 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:27.611 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:27.870 256+0 records in 00:08:27.870 256+0 records out 00:08:27.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.197387 s, 5.3 MB/s 00:08:27.870 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:27.870 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:27.870 256+0 records in 00:08:27.870 256+0 records out 00:08:27.870 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174026 s, 6.0 MB/s 00:08:27.870 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:27.870 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:28.129 256+0 records in 00:08:28.129 256+0 records out 00:08:28.129 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172397 s, 6.1 MB/s 00:08:28.129 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:28.129 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:28.388 256+0 records in 00:08:28.388 256+0 records out 00:08:28.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144819 s, 7.2 MB/s 00:08:28.388 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:28.388 05:55:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:28.388 256+0 records in 00:08:28.388 256+0 records out 00:08:28.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186613 s, 5.6 MB/s 00:08:28.388 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:28.388 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:28.388 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:28.388 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:28.388 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:28.388 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:28.388 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:28.388 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:28.389 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:28.957 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:29.525 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:29.525 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:29.525 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:29.525 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:29.525 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:29.525 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:29.525 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:29.525 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:29.525 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.526 05:55:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:29.526 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:29.526 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:29.526 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:29.526 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:29.526 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:29.526 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:29.526 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:29.526 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:29.526 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.526 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:29.793 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:29.793 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:29.793 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:29.793 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:29.793 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:29.793 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:29.793 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:29.793 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:29.793 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.793 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:30.063 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:30.063 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:30.063 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:30.063 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.063 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.063 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:30.063 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.063 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.063 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.063 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:30.329 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:30.329 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:30.329 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:30.329 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.329 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.329 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:30.329 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.329 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.329 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:30.329 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.329 05:55:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:30.588 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:30.588 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:30.588 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:30.588 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:30.588 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:30.588 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:30.588 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:30.588 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:30.588 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:30.588 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:30.588 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:30.588 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:30.588 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:30.588 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.588 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:30.588 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:30.588 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:30.588 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:30.847 malloc_lvol_verify 00:08:30.847 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:31.106 8b80b8d1-3a34-44c3-8063-38832d2462c7 00:08:31.106 05:55:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:31.364 544db49b-8a50-45d0-97f2-9808180a8085 00:08:31.364 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:31.623 /dev/nbd0 00:08:31.623 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:31.623 mke2fs 1.46.5 (30-Dec-2021) 00:08:31.623 Discarding device blocks: 0/4096 done 00:08:31.623 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:31.623 00:08:31.623 Allocating group tables: 0/1 done 00:08:31.623 Writing inode tables: 0/1 done 00:08:31.623 Creating journal (1024 blocks): done 00:08:31.623 Writing superblocks and filesystem accounting information: 0/1 done 00:08:31.623 00:08:31.623 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:31.623 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:31.623 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.623 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:31.623 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:31.623 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:31.623 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:31.623 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:31.882 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:31.882 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:31.882 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:31.882 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.882 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.882 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:31.882 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:31.882 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.882 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:31.882 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:31.882 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 78865 00:08:31.882 05:55:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 78865 ']' 00:08:31.882 05:55:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 78865 00:08:31.882 05:55:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:08:31.882 05:55:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:31.882 05:55:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78865 00:08:31.882 05:55:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:31.882 05:55:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:31.882 killing process with pid 78865 00:08:31.883 05:55:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78865' 00:08:31.883 05:55:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 78865 00:08:31.883 05:55:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 78865 00:08:32.141 05:55:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:32.141 00:08:32.141 real 0m12.548s 00:08:32.141 user 0m18.146s 00:08:32.141 sys 0m4.262s 00:08:32.141 05:55:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.141 05:55:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:32.141 ************************************ 00:08:32.141 END TEST bdev_nbd 00:08:32.141 ************************************ 00:08:32.141 05:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:08:32.141 05:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:32.141 05:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:08:32.141 skipping fio tests on NVMe due to multi-ns failures. 00:08:32.141 05:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:08:32.142 05:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:32.142 05:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:32.142 05:55:23 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:32.142 05:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:32.142 05:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.142 05:55:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:32.142 ************************************ 00:08:32.142 START TEST bdev_verify 00:08:32.142 ************************************ 00:08:32.142 05:55:23 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:32.401 [2024-07-13 05:55:23.928250] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:32.401 [2024-07-13 05:55:23.928401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79299 ] 00:08:32.401 [2024-07-13 05:55:24.066388] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:32.401 [2024-07-13 05:55:24.102868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.401 [2024-07-13 05:55:24.102930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.968 Running I/O for 5 seconds... 
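The bdev_verify pass launches bdevperf against every bdev described in bdev.json: queue depth 128 (-q), 4096-byte I/Os (-o), the data-integrity verify workload (-w), a 5-second run (-t), on the two-core mask 0x3 (-m). A standalone reproduction of the launch, with the repo root parameterized as an assumption (SPDK_REPO) and the flags copied from the trace, including the trailing empty positional argument:

    SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}
    "$SPDK_REPO/build/examples/bdevperf" \
        --json "$SPDK_REPO/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''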
00:08:38.236 00:08:38.236 Latency(us) 00:08:38.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.236 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:38.236 Verification LBA range: start 0x0 length 0x5e800 00:08:38.236 Nvme0n1p1 : 5.09 1321.17 5.16 0.00 0.00 96310.62 13285.93 82456.20 00:08:38.236 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:38.236 Verification LBA range: start 0x5e800 length 0x5e800 00:08:38.236 Nvme0n1p1 : 5.05 1266.63 4.95 0.00 0.00 100525.90 21328.99 93418.59 00:08:38.236 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:38.236 Verification LBA range: start 0x0 length 0x5e7ff 00:08:38.236 Nvme0n1p2 : 5.09 1320.58 5.16 0.00 0.00 96147.18 12988.04 79119.83 00:08:38.236 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:38.236 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:08:38.236 Nvme0n1p2 : 5.09 1268.85 4.96 0.00 0.00 100131.16 9532.51 91512.09 00:08:38.236 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:38.236 Verification LBA range: start 0x0 length 0xa0000 00:08:38.236 Nvme1n1 : 5.11 1328.42 5.19 0.00 0.00 95838.89 13702.98 76260.07 00:08:38.236 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:38.236 Verification LBA range: start 0xa0000 length 0xa0000 00:08:38.236 Nvme1n1 : 5.11 1277.21 4.99 0.00 0.00 99509.40 12511.42 87222.46 00:08:38.236 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:38.236 Verification LBA range: start 0x0 length 0x80000 00:08:38.236 Nvme2n1 : 5.11 1327.85 5.19 0.00 0.00 95702.91 13881.72 74830.20 00:08:38.236 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:38.236 Verification LBA range: start 0x80000 length 0x80000 00:08:38.236 Nvme2n1 : 5.12 1276.21 4.99 0.00 0.00 99317.16 14775.39 82932.83 00:08:38.236 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:38.236 Verification LBA range: start 0x0 length 0x80000 00:08:38.236 Nvme2n2 : 5.11 1327.27 5.18 0.00 0.00 95575.41 14656.23 76260.07 00:08:38.237 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:38.237 Verification LBA range: start 0x80000 length 0x80000 00:08:38.237 Nvme2n2 : 5.12 1275.33 4.98 0.00 0.00 99147.75 16443.58 85315.96 00:08:38.237 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:38.237 Verification LBA range: start 0x0 length 0x80000 00:08:38.237 Nvme2n3 : 5.12 1326.20 5.18 0.00 0.00 95450.07 16562.73 78643.20 00:08:38.237 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:38.237 Verification LBA range: start 0x80000 length 0x80000 00:08:38.237 Nvme2n3 : 5.12 1274.48 4.98 0.00 0.00 99003.49 16681.89 88652.33 00:08:38.237 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:38.237 Verification LBA range: start 0x0 length 0x20000 00:08:38.237 Nvme3n1 : 5.12 1325.27 5.18 0.00 0.00 95311.52 11021.96 81979.58 00:08:38.237 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:38.237 Verification LBA range: start 0x20000 length 0x20000 00:08:38.237 Nvme3n1 : 5.12 1273.99 4.98 0.00 0.00 98887.70 14894.55 92941.96 00:08:38.237 =================================================================================================================== 00:08:38.237 Total : 18189.47 71.05 0.00 0.00 97592.93 9532.51 93418.59 
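A quick consistency check on the table above: the MiB/s column is simply IOPS multiplied by the 4096-byte I/O size. For the first row, 1321.17 IOPS * 4096 B / 2^20 gives about 5.16 MiB/s, matching the reported value. The same arithmetic as a one-off awk call:

    awk 'BEGIN { iops = 1321.17; iosize = 4096
                 printf "%.2f MiB/s\n", iops * iosize / (1024 * 1024) }'
    # prints: 5.16 MiB/s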
00:08:38.494 00:08:38.494 real 0m6.254s 00:08:38.494 user 0m11.690s 00:08:38.494 sys 0m0.232s 00:08:38.494 05:55:30 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.494 ************************************ 00:08:38.494 END TEST bdev_verify 00:08:38.494 05:55:30 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:38.494 ************************************ 00:08:38.494 05:55:30 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:08:38.494 05:55:30 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:38.494 05:55:30 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:08:38.494 05:55:30 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.494 05:55:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:38.494 ************************************ 00:08:38.494 START TEST bdev_verify_big_io 00:08:38.494 ************************************ 00:08:38.494 05:55:30 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:38.753 [2024-07-13 05:55:30.235245] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:38.753 [2024-07-13 05:55:30.235435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79386 ] 00:08:38.753 [2024-07-13 05:55:30.384621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:38.753 [2024-07-13 05:55:30.428984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.753 [2024-07-13 05:55:30.429015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.320 Running I/O for 5 seconds... 
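Every test in this log runs through the same run_test wrapper: print a START banner, time the payload command, record real/user/sys, print an END banner, and propagate the exit status. The real logic lives in autotest_common.sh; the following is only a simplified sketch of the observable pattern, not the original implementation:

    # Illustrative sketch of the run_test banner/timing pattern seen above.
    run_test_sketch() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"            # timed payload; its status passes through
        local rc=$?
        echo "************ END TEST $name ************"
        return "$rc"
    }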
00:08:45.882 00:08:45.882 Latency(us) 00:08:45.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.882 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:45.882 Verification LBA range: start 0x0 length 0x5e80 00:08:45.882 Nvme0n1p1 : 5.98 99.25 6.20 0.00 0.00 1203243.07 15252.01 1731103.65 00:08:45.882 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:45.882 Verification LBA range: start 0x5e80 length 0x5e80 00:08:45.882 Nvme0n1p1 : 5.83 109.82 6.86 0.00 0.00 1114266.44 20733.21 1273543.21 00:08:45.882 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:45.882 Verification LBA range: start 0x0 length 0x5e7f 00:08:45.882 Nvme0n1p2 : 5.98 103.05 6.44 0.00 0.00 1140927.60 32887.16 1753981.67 00:08:45.882 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:45.882 Verification LBA range: start 0x5e7f length 0x5e7f 00:08:45.882 Nvme0n1p2 : 6.05 101.70 6.36 0.00 0.00 1164666.46 62914.56 1464193.40 00:08:45.882 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:45.882 Verification LBA range: start 0x0 length 0xa000 00:08:45.882 Nvme1n1 : 6.05 109.43 6.84 0.00 0.00 1039067.79 54096.99 1220161.16 00:08:45.882 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:45.882 Verification LBA range: start 0xa000 length 0xa000 00:08:45.882 Nvme1n1 : 6.05 78.69 4.92 0.00 0.00 1454448.61 131548.63 1998013.91 00:08:45.882 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:45.883 Verification LBA range: start 0x0 length 0x8000 00:08:45.883 Nvme2n1 : 6.14 113.31 7.08 0.00 0.00 990522.71 39083.29 1830241.75 00:08:45.883 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:45.883 Verification LBA range: start 0x8000 length 0x8000 00:08:45.883 Nvme2n1 : 6.05 115.82 7.24 0.00 0.00 974348.89 70063.94 1090519.04 00:08:45.883 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:45.883 Verification LBA range: start 0x0 length 0x8000 00:08:45.883 Nvme2n2 : 6.14 112.94 7.06 0.00 0.00 958133.40 39798.23 1860745.77 00:08:45.883 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:45.883 Verification LBA range: start 0x8000 length 0x8000 00:08:45.883 Nvme2n2 : 6.14 120.35 7.52 0.00 0.00 908228.18 37176.79 1121023.07 00:08:45.883 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:45.883 Verification LBA range: start 0x0 length 0x8000 00:08:45.883 Nvme2n3 : 6.15 117.12 7.32 0.00 0.00 896711.21 50760.61 1891249.80 00:08:45.883 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:45.883 Verification LBA range: start 0x8000 length 0x8000 00:08:45.883 Nvme2n3 : 6.14 125.02 7.81 0.00 0.00 852663.23 47185.92 1151527.10 00:08:45.883 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:45.883 Verification LBA range: start 0x0 length 0x2000 00:08:45.883 Nvme3n1 : 6.18 136.83 8.55 0.00 0.00 746912.83 886.23 1906501.82 00:08:45.883 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:45.883 Verification LBA range: start 0x2000 length 0x2000 00:08:45.883 Nvme3n1 : 6.15 135.25 8.45 0.00 0.00 765489.48 2100.13 1182031.13 00:08:45.883 =================================================================================================================== 00:08:45.883 Total : 1578.58 98.66 0.00 0.00 990176.69 
886.23 1998013.91 00:08:45.883 00:08:45.883 real 0m7.432s 00:08:45.883 user 0m14.045s 00:08:45.883 sys 0m0.237s 00:08:45.883 05:55:37 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.883 05:55:37 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:45.883 ************************************ 00:08:45.883 END TEST bdev_verify_big_io 00:08:45.883 ************************************ 00:08:46.141 05:55:37 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:08:46.141 05:55:37 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:46.141 05:55:37 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:46.141 05:55:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.141 05:55:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:46.141 ************************************ 00:08:46.141 START TEST bdev_write_zeroes 00:08:46.141 ************************************ 00:08:46.141 05:55:37 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:46.141 [2024-07-13 05:55:37.704852] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:46.141 [2024-07-13 05:55:37.705038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79484 ] 00:08:46.141 [2024-07-13 05:55:37.843438] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.398 [2024-07-13 05:55:37.878730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.655 Running I/O for 1 seconds... 
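The bdev_write_zeroes pass below prints the same table layout as the two verify passes above. When scraping such output, the aggregate line is the one whose first field is "Total". A small extraction sketch (bdevperf.log is a hypothetical capture of the raw bdevperf output, without the per-line timestamps this CI log prepends):

    # Pull aggregate IOPS and MiB/s out of a bdevperf results table.
    awk '$1 == "Total" { print "IOPS:", $3, "MiB/s:", $4 }' bdevperf.log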
00:08:47.615 00:08:47.615 Latency(us) 00:08:47.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.616 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:47.616 Nvme0n1p1 : 1.02 7130.05 27.85 0.00 0.00 17884.86 12571.00 29312.47 00:08:47.616 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:47.616 Nvme0n1p2 : 1.02 7118.40 27.81 0.00 0.00 17878.69 13047.62 30384.87 00:08:47.616 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:47.616 Nvme1n1 : 1.03 7107.92 27.77 0.00 0.00 17843.08 13643.40 27048.49 00:08:47.616 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:47.616 Nvme2n1 : 1.03 7097.32 27.72 0.00 0.00 17765.10 12451.84 25022.84 00:08:47.616 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:47.616 Nvme2n2 : 1.03 7086.85 27.68 0.00 0.00 17725.52 10307.03 22878.02 00:08:47.616 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:47.616 Nvme2n3 : 1.03 7131.51 27.86 0.00 0.00 17648.32 9651.67 21448.15 00:08:47.616 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:47.616 Nvme3n1 : 1.03 7120.98 27.82 0.00 0.00 17633.81 9115.46 21328.99 00:08:47.616 =================================================================================================================== 00:08:47.616 Total : 49793.02 194.50 0.00 0.00 17768.17 9115.46 30384.87 00:08:47.873 00:08:47.873 real 0m1.918s 00:08:47.873 user 0m1.641s 00:08:47.873 sys 0m0.161s 00:08:47.873 05:55:39 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:47.873 05:55:39 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:47.873 ************************************ 00:08:47.873 END TEST bdev_write_zeroes 00:08:47.873 ************************************ 00:08:47.873 05:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:08:47.873 05:55:39 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:47.873 05:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:47.873 05:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.873 05:55:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.132 ************************************ 00:08:48.132 START TEST bdev_json_nonenclosed 00:08:48.132 ************************************ 00:08:48.132 05:55:39 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:48.132 [2024-07-13 05:55:39.696690] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:48.132 [2024-07-13 05:55:39.696888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79526 ] 00:08:48.132 [2024-07-13 05:55:39.846411] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.391 [2024-07-13 05:55:39.885810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.391 [2024-07-13 05:55:39.885942] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:48.391 [2024-07-13 05:55:39.885983] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:48.391 [2024-07-13 05:55:39.886005] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:48.391 00:08:48.391 real 0m0.378s 00:08:48.391 user 0m0.166s 00:08:48.391 sys 0m0.108s 00:08:48.391 05:55:39 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:08:48.391 05:55:39 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.391 05:55:39 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:48.391 ************************************ 00:08:48.391 END TEST bdev_json_nonenclosed 00:08:48.391 ************************************ 00:08:48.391 05:55:40 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:08:48.391 05:55:40 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # true 00:08:48.391 05:55:40 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:48.391 05:55:40 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:48.391 05:55:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.391 05:55:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.391 ************************************ 00:08:48.391 START TEST bdev_json_nonarray 00:08:48.391 ************************************ 00:08:48.391 05:55:40 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:48.649 [2024-07-13 05:55:40.130084] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:48.649 [2024-07-13 05:55:40.130318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79546 ] 00:08:48.649 [2024-07-13 05:55:40.278784] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.649 [2024-07-13 05:55:40.316866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.649 [2024-07-13 05:55:40.317021] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
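bdev_json_nonenclosed and bdev_json_nonarray are negative tests: bdevperf is fed a deliberately malformed --json config (not enclosed in {} in one case, a non-array "subsystems" in the other) and must refuse it cleanly, ending with spdk_app_stop'd on non-zero; the harness captures the status (es=234 above) and counts the failure as a pass. The expected-failure idiom, sketched with the file name and flags taken from the trace:

    SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}
    if "$SPDK_REPO/build/examples/bdevperf" \
        --json "$SPDK_REPO/test/bdev/nonenclosed.json" \
        -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo "unexpected success" >&2
        exit 1
    else
        echo "failed as expected (es=$?)"   # harness expects a non-zero status
    fi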
00:08:48.649 [2024-07-13 05:55:40.317057] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:48.649 [2024-07-13 05:55:40.317072] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:48.906 00:08:48.906 real 0m0.377s 00:08:48.906 user 0m0.171s 00:08:48.906 sys 0m0.102s 00:08:48.906 05:55:40 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:08:48.906 05:55:40 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.906 05:55:40 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:48.906 ************************************ 00:08:48.906 END TEST bdev_json_nonarray 00:08:48.906 ************************************ 00:08:48.906 05:55:40 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:08:48.906 05:55:40 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # true 00:08:48.906 05:55:40 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:08:48.906 05:55:40 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:08:48.906 05:55:40 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:48.906 05:55:40 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:48.906 05:55:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.907 05:55:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.907 ************************************ 00:08:48.907 START TEST bdev_gpt_uuid 00:08:48.907 ************************************ 00:08:48.907 05:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:08:48.907 05:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:08:48.907 05:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:08:48.907 05:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=79577 00:08:48.907 05:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:48.907 05:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:48.907 05:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 79577 00:08:48.907 05:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 79577 ']' 00:08:48.907 05:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.907 05:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:48.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.907 05:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.907 05:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:48.907 05:55:40 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:48.907 [2024-07-13 05:55:40.582088] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
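bdev_gpt_uuid switches from one-shot bdevperf runs to a long-lived spdk_tgt daemon: start the target, wait for its RPC socket, then push the bdev configuration through the load_config RPC and let bdev examine finish. A condensed sketch of that startup; the binary, RPC calls, and config path are taken from the trace, while the socket-polling loop stands in for the harness's waitforlisten helper:

    SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}
    "$SPDK_REPO/build/bin/spdk_tgt" '' '' &   # empty args as in the trace
    tgt_pid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # assumed wait loop
    "$SPDK_REPO/scripts/rpc.py" load_config -j "$SPDK_REPO/test/bdev/bdev.json"
    "$SPDK_REPO/scripts/rpc.py" bdev_wait_for_examine
    # ... run the GPT checks, then: kill "$tgt_pid"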
00:08:48.907 [2024-07-13 05:55:40.582314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79577 ] 00:08:49.165 [2024-07-13 05:55:40.728273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.165 [2024-07-13 05:55:40.765572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.099 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:50.099 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:08:50.099 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:50.099 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.099 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:50.358 Some configs were skipped because the RPC state that can call them passed over. 00:08:50.358 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.358 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:08:50.358 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.358 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:50.358 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.358 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:50.358 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.358 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:50.358 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.358 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:08:50.358 { 00:08:50.358 "name": "Nvme0n1p1", 00:08:50.358 "aliases": [ 00:08:50.358 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:50.358 ], 00:08:50.358 "product_name": "GPT Disk", 00:08:50.358 "block_size": 4096, 00:08:50.358 "num_blocks": 774144, 00:08:50.358 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:50.358 "md_size": 64, 00:08:50.358 "md_interleave": false, 00:08:50.358 "dif_type": 0, 00:08:50.358 "assigned_rate_limits": { 00:08:50.358 "rw_ios_per_sec": 0, 00:08:50.358 "rw_mbytes_per_sec": 0, 00:08:50.358 "r_mbytes_per_sec": 0, 00:08:50.358 "w_mbytes_per_sec": 0 00:08:50.358 }, 00:08:50.358 "claimed": false, 00:08:50.358 "zoned": false, 00:08:50.358 "supported_io_types": { 00:08:50.358 "read": true, 00:08:50.358 "write": true, 00:08:50.358 "unmap": true, 00:08:50.358 "flush": true, 00:08:50.358 "reset": true, 00:08:50.358 "nvme_admin": false, 00:08:50.358 "nvme_io": false, 00:08:50.358 "nvme_io_md": false, 00:08:50.358 "write_zeroes": true, 00:08:50.358 "zcopy": false, 00:08:50.358 "get_zone_info": false, 00:08:50.358 "zone_management": false, 00:08:50.358 "zone_append": false, 00:08:50.358 "compare": true, 00:08:50.358 "compare_and_write": false, 00:08:50.358 "abort": true, 00:08:50.358 "seek_hole": false, 00:08:50.358 "seek_data": false, 00:08:50.358 "copy": 
true, 00:08:50.358 "nvme_iov_md": false 00:08:50.358 }, 00:08:50.358 "driver_specific": { 00:08:50.358 "gpt": { 00:08:50.358 "base_bdev": "Nvme0n1", 00:08:50.358 "offset_blocks": 256, 00:08:50.358 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:50.358 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:50.358 "partition_name": "SPDK_TEST_first" 00:08:50.358 } 00:08:50.358 } 00:08:50.358 } 00:08:50.358 ]' 00:08:50.358 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:08:50.358 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:08:50.358 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:08:50.358 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:50.358 05:55:41 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:50.358 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:50.358 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:50.358 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.358 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:50.358 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.358 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:08:50.358 { 00:08:50.358 "name": "Nvme0n1p2", 00:08:50.358 "aliases": [ 00:08:50.358 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:50.358 ], 00:08:50.358 "product_name": "GPT Disk", 00:08:50.358 "block_size": 4096, 00:08:50.358 "num_blocks": 774143, 00:08:50.358 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:50.358 "md_size": 64, 00:08:50.358 "md_interleave": false, 00:08:50.358 "dif_type": 0, 00:08:50.358 "assigned_rate_limits": { 00:08:50.358 "rw_ios_per_sec": 0, 00:08:50.358 "rw_mbytes_per_sec": 0, 00:08:50.358 "r_mbytes_per_sec": 0, 00:08:50.358 "w_mbytes_per_sec": 0 00:08:50.358 }, 00:08:50.358 "claimed": false, 00:08:50.358 "zoned": false, 00:08:50.358 "supported_io_types": { 00:08:50.358 "read": true, 00:08:50.358 "write": true, 00:08:50.358 "unmap": true, 00:08:50.358 "flush": true, 00:08:50.358 "reset": true, 00:08:50.358 "nvme_admin": false, 00:08:50.358 "nvme_io": false, 00:08:50.358 "nvme_io_md": false, 00:08:50.358 "write_zeroes": true, 00:08:50.358 "zcopy": false, 00:08:50.358 "get_zone_info": false, 00:08:50.358 "zone_management": false, 00:08:50.358 "zone_append": false, 00:08:50.358 "compare": true, 00:08:50.358 "compare_and_write": false, 00:08:50.358 "abort": true, 00:08:50.358 "seek_hole": false, 00:08:50.358 "seek_data": false, 00:08:50.358 "copy": true, 00:08:50.358 "nvme_iov_md": false 00:08:50.358 }, 00:08:50.358 "driver_specific": { 00:08:50.358 "gpt": { 00:08:50.358 "base_bdev": "Nvme0n1", 00:08:50.358 "offset_blocks": 774400, 00:08:50.358 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:50.358 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:50.358 "partition_name": "SPDK_TEST_second" 00:08:50.358 } 00:08:50.358 
} 00:08:50.358 } 00:08:50.358 ]' 00:08:50.358 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:08:50.616 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:08:50.617 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:08:50.617 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:50.617 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:50.617 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:50.617 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 79577 00:08:50.617 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 79577 ']' 00:08:50.617 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 79577 00:08:50.617 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:08:50.617 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:50.617 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79577 00:08:50.617 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:50.617 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:50.617 killing process with pid 79577 00:08:50.617 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79577' 00:08:50.617 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 79577 00:08:50.617 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 79577 00:08:50.875 00:08:50.875 real 0m2.047s 00:08:50.875 user 0m2.421s 00:08:50.875 sys 0m0.383s 00:08:50.875 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:50.875 05:55:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:50.875 ************************************ 00:08:50.875 END TEST bdev_gpt_uuid 00:08:50.875 ************************************ 00:08:50.875 05:55:42 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:08:50.875 05:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:08:50.875 05:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:08:50.875 05:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:08:50.875 05:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:50.875 05:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:50.875 05:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:50.875 05:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:50.875 05:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:50.875 05:55:42 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 
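The two bdev_get_bdevs dumps above are validated with jq before the target is killed and the controllers are reset below: exactly one bdev must come back, its first alias must equal the partition's unique GUID, and driver_specific.gpt.unique_partition_guid must match as well. The same assertions as a standalone check, with the GUID of SPDK_TEST_second copied from the trace:

    SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}
    expect=abf1734f-66e5-4c0f-aa29-4021d4d307df
    json=$("$SPDK_REPO/scripts/rpc.py" bdev_get_bdevs -b "$expect")
    [ "$(jq -r length <<<"$json")" = 1 ] || exit 1
    [ "$(jq -r '.[0].aliases[0]' <<<"$json")" = "$expect" ] || exit 1
    [ "$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$json")" = "$expect" ] || exit 1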
00:08:51.441 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:51.441 Waiting for block devices as requested 00:08:51.441 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:51.698 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:51.698 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:51.698 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:56.969 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:56.969 05:55:48 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:08:56.969 05:55:48 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme1n1 00:08:57.228 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:57.228 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:08:57.228 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:57.228 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:08:57.228 05:55:48 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:57.228 00:08:57.228 real 0m51.299s 00:08:57.228 user 1m5.649s 00:08:57.228 sys 0m8.940s 00:08:57.228 05:55:48 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.228 05:55:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:57.228 ************************************ 00:08:57.228 END TEST blockdev_nvme_gpt 00:08:57.228 ************************************ 00:08:57.228 05:55:48 -- common/autotest_common.sh@1142 -- # return 0 00:08:57.228 05:55:48 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:57.228 05:55:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:57.228 05:55:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.228 05:55:48 -- common/autotest_common.sh@10 -- # set +x 00:08:57.228 ************************************ 00:08:57.228 START TEST nvme 00:08:57.228 ************************************ 00:08:57.228 05:55:48 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:57.228 * Looking for test storage... 00:08:57.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:57.228 05:55:48 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:57.796 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:58.363 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:58.363 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:58.363 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:58.363 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:58.363 05:55:50 nvme -- nvme/nvme.sh@79 -- # uname 00:08:58.363 05:55:50 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:58.363 05:55:50 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:58.363 05:55:50 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:58.363 05:55:50 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:58.363 05:55:50 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:08:58.363 05:55:50 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:08:58.363 Waiting for stub to ready for secondary processes... 
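The wipefs output above decodes neatly: the 8 erased bytes 45 46 49 20 50 41 52 54 are ASCII "EFI PART", the GPT header signature, removed from both the primary header at offset 0x1000 and the backup header near the end of the disk (0x17a179000), while the 2 bytes 55 aa at offset 0x1fe are the protective MBR's boot signature. One way to confirm the primary signature is really gone (root required; /dev/nvme1n1 is the device from this run):

    # Read 8 bytes at the primary GPT header offset; expect zeros after the wipe.
    dd if=/dev/nvme1n1 bs=1 skip=$((0x1000)) count=8 2>/dev/null | xxd
    # before the wipe this would show: 4546 4920 5041 5254  EFI PART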
00:08:58.363 05:55:50 nvme -- common/autotest_common.sh@1069 -- # stubpid=80192 00:08:58.363 05:55:50 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:58.363 05:55:50 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 00:08:58.363 05:55:50 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:58.363 05:55:50 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/80192 ]] 00:08:58.363 05:55:50 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:08:58.621 [2024-07-13 05:55:50.128308] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:58.621 [2024-07-13 05:55:50.128846] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:59.189 [2024-07-13 05:55:50.914126] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:59.448 [2024-07-13 05:55:50.951689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.448 [2024-07-13 05:55:50.951823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.448 [2024-07-13 05:55:50.951894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.449 [2024-07-13 05:55:50.972457] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:59.449 [2024-07-13 05:55:50.972548] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:59.449 [2024-07-13 05:55:50.985339] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:59.449 [2024-07-13 05:55:50.985585] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:59.449 [2024-07-13 05:55:50.986243] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:59.449 [2024-07-13 05:55:50.986477] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:59.449 [2024-07-13 05:55:50.986580] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:59.449 [2024-07-13 05:55:50.987497] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:59.449 [2024-07-13 05:55:50.988066] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:59.449 [2024-07-13 05:55:50.988205] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:59.449 [2024-07-13 05:55:50.989120] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:59.449 [2024-07-13 05:55:50.989445] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:59.449 [2024-07-13 05:55:50.989675] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:59.449 [2024-07-13 05:55:50.989852] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:59.449 [2024-07-13 05:55:50.990233] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:59.449 done. 
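The nvme suite keeps hugepage memory and the cuse nodes alive in a single "stub" primary process (-s 4096 MB of memory, -i 0 as the shared-memory id, core mask 0xE) so that each short-lived test binary can attach as a secondary process. While the stub sets up, the harness polls for the readiness file, as reconstructed here from the trace (the pid and the 1s sleep are shown above):

    stubpid=80192   # pid reported by this run
    while [ ! -e /var/run/spdk_stub0 ]; do
        [ -e "/proc/$stubpid" ] || { echo "stub exited early" >&2; exit 1; }
        sleep 1s
    done
    echo done.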
00:08:59.449 05:55:51 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:59.449 05:55:51 nvme -- common/autotest_common.sh@1076 -- # echo done. 00:08:59.449 05:55:51 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:59.449 05:55:51 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:08:59.449 05:55:51 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.449 05:55:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.449 ************************************ 00:08:59.449 START TEST nvme_reset 00:08:59.449 ************************************ 00:08:59.449 05:55:51 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:59.708 Initializing NVMe Controllers 00:08:59.708 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:59.708 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:59.708 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:59.708 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:59.708 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:59.708 ************************************ 00:08:59.708 END TEST nvme_reset 00:08:59.708 ************************************ 00:08:59.708 00:08:59.708 real 0m0.248s 00:08:59.708 user 0m0.088s 00:08:59.708 sys 0m0.107s 00:08:59.708 05:55:51 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.708 05:55:51 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:59.708 05:55:51 nvme -- common/autotest_common.sh@1142 -- # return 0 00:08:59.708 05:55:51 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:59.708 05:55:51 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:59.708 05:55:51 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.708 05:55:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.708 ************************************ 00:08:59.708 START TEST nvme_identify 00:08:59.708 ************************************ 00:08:59.708 05:55:51 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:08:59.708 05:55:51 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:59.708 05:55:51 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:59.708 05:55:51 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:59.708 05:55:51 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:59.708 05:55:51 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:08:59.708 05:55:51 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:08:59.708 05:55:51 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:59.708 05:55:51 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:59.708 05:55:51 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:08:59.969 05:55:51 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:08:59.969 05:55:51 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:59.969 05:55:51 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:59.969 
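nvme_identify builds its device list by asking gen_nvme.sh for a generated JSON config and extracting each controller's PCI address with jq, then points spdk_nvme_identify at the first controller; its output follows. The enumeration step, reconstructed from the trace:

    SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}
    # Collect every NVMe BDF known to the setup scripts.
    bdfs=($("$SPDK_REPO/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"    # e.g. 0000:00:10.0 ... 0000:00:13.0
    "$SPDK_REPO/build/bin/spdk_nvme_identify" -i 0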
===================================================== 00:08:59.969 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:59.969 ===================================================== 00:08:59.969 Controller Capabilities/Features 00:08:59.969 ================================ 00:08:59.969 Vendor ID: 1b36 00:08:59.969 Subsystem Vendor ID: 1af4 00:08:59.969 Serial Number: 12340 00:08:59.969 Model Number: QEMU NVMe Ctrl 00:08:59.969 Firmware Version: 8.0.0 00:08:59.969 Recommended Arb Burst: 6 00:08:59.969 IEEE OUI Identifier: 00 54 52 00:08:59.969 Multi-path I/O 00:08:59.969 May have multiple subsystem ports: No 00:08:59.970 May have multiple controllers: No 00:08:59.970 Associated with SR-IOV VF: No 00:08:59.970 Max Data Transfer Size: 524288 00:08:59.970 Max Number of Namespaces: 256 00:08:59.970 Max Number of I/O Queues: 64 00:08:59.970 NVMe Specification Version (VS): 1.4 00:08:59.970 NVMe Specification Version (Identify): 1.4 00:08:59.970 Maximum Queue Entries: 2048 00:08:59.970 Contiguous Queues Required: Yes 00:08:59.970 Arbitration Mechanisms Supported 00:08:59.970 Weighted Round Robin: Not Supported 00:08:59.970 Vendor Specific: Not Supported 00:08:59.970 Reset Timeout: 7500 ms 00:08:59.970 Doorbell Stride: 4 bytes 00:08:59.970 NVM Subsystem Reset: Not Supported 00:08:59.970 Command Sets Supported 00:08:59.970 NVM Command Set: Supported 00:08:59.970 Boot Partition: Not Supported 00:08:59.970 Memory Page Size Minimum: 4096 bytes 00:08:59.970 Memory Page Size Maximum: 65536 bytes 00:08:59.970 Persistent Memory Region: Not Supported 00:08:59.970 Optional Asynchronous Events Supported 00:08:59.970 Namespace Attribute Notices: Supported 00:08:59.970 Firmware Activation Notices: Not Supported 00:08:59.970 ANA Change Notices: Not Supported 00:08:59.970 PLE Aggregate Log Change Notices: Not Supported 00:08:59.970 LBA Status Info Alert Notices: Not Supported 00:08:59.970 EGE Aggregate Log Change Notices: Not Supported 00:08:59.970 Normal NVM Subsystem Shutdown event: Not Supported 00:08:59.970 Zone Descriptor Change Notices: Not Supported 00:08:59.970 Discovery Log Change Notices: Not Supported 00:08:59.970 Controller Attributes 00:08:59.970 128-bit Host Identifier: Not Supported 00:08:59.970 Non-Operational Permissive Mode: Not Supported 00:08:59.970 NVM Sets: Not Supported 00:08:59.970 Read Recovery Levels: Not Supported 00:08:59.970 Endurance Groups: Not Supported 00:08:59.970 Predictable Latency Mode: Not Supported 00:08:59.970 Traffic Based Keep Alive: Not Supported 00:08:59.970 Namespace Granularity: Not Supported 00:08:59.970 SQ Associations: Not Supported 00:08:59.970 UUID List: Not Supported 00:08:59.970 Multi-Domain Subsystem: Not Supported 00:08:59.970 Fixed Capacity Management: Not Supported 00:08:59.970 Variable Capacity Management: Not Supported 00:08:59.970 Delete Endurance Group: Not Supported 00:08:59.970 Delete NVM Set: Not Supported 00:08:59.970 Extended LBA Formats Supported: Supported 00:08:59.970 Flexible Data Placement Supported: Not Supported 00:08:59.970 00:08:59.970 Controller Memory Buffer Support 00:08:59.970 ================================ 00:08:59.970 Supported: No 00:08:59.970 00:08:59.970 Persistent Memory Region Support 00:08:59.970 ================================ 00:08:59.970 Supported: No 00:08:59.970 00:08:59.970 Admin Command Set Attributes 00:08:59.970 ============================ 00:08:59.970 Security Send/Receive: Not Supported 00:08:59.970 Format NVM: Supported 00:08:59.970 Firmware Activate/Download: Not Supported 00:08:59.970 Namespace Management:
Supported 00:08:59.970 Device Self-Test: Not Supported 00:08:59.970 Directives: Supported 00:08:59.970 NVMe-MI: Not Supported 00:08:59.970 Virtualization Management: Not Supported 00:08:59.970 Doorbell Buffer Config: Supported 00:08:59.970 Get LBA Status Capability: Not Supported 00:08:59.970 Command & Feature Lockdown Capability: Not Supported 00:08:59.970 Abort Command Limit: 4 00:08:59.970 Async Event Request Limit: 4 00:08:59.970 Number of Firmware Slots: N/A 00:08:59.970 Firmware Slot 1 Read-Only: N/A 00:08:59.970 Firmware Activation Without Reset: N/A 00:08:59.970 Multiple Update Detection Support: N/A 00:08:59.970 Firmware Update Granularity: No Information Provided 00:08:59.970 Per-Namespace SMART Log: Yes 00:08:59.970 Asymmetric Namespace Access Log Page: Not Supported 00:08:59.970 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:59.970 Command Effects Log Page: Supported 00:08:59.970 Get Log Page Extended Data: Supported 00:08:59.970 Telemetry Log Pages: Not Supported 00:08:59.970 Persistent Event Log Pages: Not Supported 00:08:59.970 Supported Log Pages Log Page: May Support 00:08:59.970 Commands Supported & Effects Log Page: Not Supported 00:08:59.970 Feature Identifiers & Effects Log Page: May Support 00:08:59.970 NVMe-MI Commands & Effects Log Page: May Support 00:08:59.970 Data Area 4 for Telemetry Log: Not Supported 00:08:59.970 Error Log Page Entries Supported: 1 00:08:59.970 Keep Alive: Not Supported 00:08:59.970 00:08:59.970 NVM Command Set Attributes 00:08:59.970 ========================== 00:08:59.970 Submission Queue Entry Size 00:08:59.970 Max: 64 00:08:59.970 Min: 64 00:08:59.970 Completion Queue Entry Size 00:08:59.970 Max: 16 00:08:59.970 Min: 16 00:08:59.970 Number of Namespaces: 256 00:08:59.970 Compare Command: Supported 00:08:59.970 Write Uncorrectable Command: Not Supported 00:08:59.970 Dataset Management Command: Supported 00:08:59.970 Write Zeroes Command: Supported 00:08:59.970 Set Features Save Field: Supported 00:08:59.970 Reservations: Not Supported 00:08:59.970 Timestamp: Supported 00:08:59.970 Copy: Supported 00:08:59.970 Volatile Write Cache: Present 00:08:59.970 Atomic Write Unit (Normal): 1 00:08:59.970 Atomic Write Unit (PFail): 1 00:08:59.970 Atomic Compare & Write Unit: 1 00:08:59.970 Fused Compare & Write: Not Supported 00:08:59.970 Scatter-Gather List 00:08:59.970 SGL Command Set: Supported 00:08:59.970 SGL Keyed: Not Supported 00:08:59.970 SGL Bit Bucket Descriptor: Not Supported 00:08:59.970 SGL Metadata Pointer: Not Supported 00:08:59.970 Oversized SGL: Not Supported 00:08:59.970 SGL Metadata Address: Not Supported 00:08:59.970 SGL Offset: Not Supported 00:08:59.970 Transport SGL Data Block: Not Supported 00:08:59.970 Replay Protected Memory Block: Not Supported 00:08:59.970 00:08:59.970 Firmware Slot Information 00:08:59.970 ========================= 00:08:59.970 Active slot: 1 00:08:59.970 Slot 1 Firmware Revision: 1.0 00:08:59.970 00:08:59.970 00:08:59.970 Commands Supported and Effects 00:08:59.970 ============================== 00:08:59.970 Admin Commands 00:08:59.970 -------------- 00:08:59.970 Delete I/O Submission Queue (00h): Supported 00:08:59.970 Create I/O Submission Queue (01h): Supported 00:08:59.970 Get Log Page (02h): Supported 00:08:59.970 Delete I/O Completion Queue (04h): Supported 00:08:59.970 Create I/O Completion Queue (05h): Supported 00:08:59.970 Identify (06h): Supported 00:08:59.970 Abort (08h): Supported 00:08:59.970 Set Features (09h): Supported 00:08:59.970 Get Features (0Ah): Supported 00:08:59.970 Asynchronous
Event Request (0Ch): Supported 00:08:59.970 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:59.970 Directive Send (19h): Supported 00:08:59.970 Directive Receive (1Ah): Supported 00:08:59.970 Virtualization Management (1Ch): Supported 00:08:59.970 Doorbell Buffer Config (7Ch): Supported 00:08:59.970 Format NVM (80h): Supported LBA-Change 00:08:59.970 I/O Commands 00:08:59.970 ------------ 00:08:59.970 Flush (00h): Supported LBA-Change 00:08:59.970 Write (01h): Supported LBA-Change 00:08:59.970 Read (02h): Supported 00:08:59.970 Compare (05h): Supported 00:08:59.970 Write Zeroes (08h): Supported LBA-Change 00:08:59.970 Dataset Management (09h): Supported LBA-Change 00:08:59.970 Unknown (0Ch): Supported 00:08:59.970 Unknown (12h): Supported 00:08:59.970 Copy (19h): Supported LBA-Change 00:08:59.970 Unknown (1Dh): Supported LBA-Change 00:08:59.970 00:08:59.970 Error Log 00:08:59.970 ========= 00:08:59.970 00:08:59.970 Arbitration 00:08:59.970 =========== 00:08:59.970 Arbitration Burst: no limit 00:08:59.970 00:08:59.970 Power Management 00:08:59.970 ================ 00:08:59.970 Number of Power States: 1 00:08:59.970 Current Power State: Power State #0 00:08:59.970 Power State #0: 00:08:59.970 Max Power: 25.00 W 00:08:59.970 Non-Operational State: Operational 00:08:59.970 Entry Latency: 16 microseconds 00:08:59.970 Exit Latency: 4 microseconds 00:08:59.970 Relative Read Throughput: 0 00:08:59.970 Relative Read Latency: 0 00:08:59.970 Relative Write Throughput: 0 00:08:59.970 Relative Write Latency: 0 00:08:59.970 Idle Power: Not Reported 00:08:59.970 Active Power: Not Reported 00:08:59.970 Non-Operational Permissive Mode: Not Supported 00:08:59.970 00:08:59.970 Health Information 00:08:59.970 ================== 00:08:59.970 Critical Warnings: 00:08:59.970 Available Spare Space: OK 00:08:59.970 Temperature: OK 00:08:59.970 Device Reliability: OK 00:08:59.970 Read Only: No 00:08:59.970 Volatile Memory Backup: OK 00:08:59.970 Current Temperature: 323 Kelvin (50 Celsius) 00:08:59.970 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:59.970 Available Spare: 0% 00:08:59.970 Available Spare Threshold: 0% 00:08:59.970 Life Percentage Used: 0% 00:08:59.970 Data Units Read: 1030 00:08:59.970 Data Units Written: 857 00:08:59.970 Host Read Commands: 48344 00:08:59.970 Host Write Commands: 46786 00:08:59.970 Controller Busy Time: 0 minutes 00:08:59.970 Power Cycles: 0 00:08:59.970 Power On Hours: 0 hours 00:08:59.970 Unsafe Shutdowns: 0 00:08:59.970 Unrecoverable Media Errors: 0 00:08:59.970 Lifetime Error Log Entries: 0 00:08:59.970 Warning Temperature Time: 0 minutes 00:08:59.970 Critical Temperature Time: 0 minutes 00:08:59.970 00:08:59.970 Number of Queues 00:08:59.970 ================ 00:08:59.970 Number of I/O Submission Queues: 64 00:08:59.971 Number of I/O Completion Queues: 64 00:08:59.971 00:08:59.971 ZNS Specific Controller Data 00:08:59.971 ============================ 00:08:59.971 Zone Append Size Limit: 0 00:08:59.971 00:08:59.971 00:08:59.971 Active Namespaces 00:08:59.971 ================= 00:08:59.971 Namespace ID:1 00:08:59.971 Error Recovery Timeout: Unlimited 00:08:59.971 Command Set Identifier: NVM (00h) 00:08:59.971 Deallocate: Supported 00:08:59.971 Deallocated/Unwritten Error: Supported 00:08:59.971 Deallocated Read Value: All 0x00 00:08:59.971 Deallocate in Write Zeroes: Not Supported 00:08:59.971 Deallocated Guard Field: 0xFFFF 00:08:59.971 Flush: Supported 00:08:59.971 Reservation: Not Supported 00:08:59.971 Metadata Transferred as: Separate Metadata Buffer 
00:08:59.971 Namespace Sharing Capabilities: Private 00:08:59.971 Size (in LBAs): 1548666 (5GiB) 00:08:59.971 Capacity (in LBAs): 1548666 (5GiB) 00:08:59.971 Utilization (in LBAs): 1548666 (5GiB) 00:08:59.971 Thin Provisioning: Not Supported 00:08:59.971 Per-NS Atomic Units: No 00:08:59.971 Maximum Single Source Range Length: 128 00:08:59.971 Maximum Copy Length: 128 00:08:59.971 Maximum Source Range Count: 128 00:08:59.971 NGUID/EUI64 Never Reused: No 00:08:59.971 Namespace Write Protected: No 00:08:59.971 Number of LBA Formats: 8 00:08:59.971 Current LBA Format: LBA Format #07 00:08:59.971 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:59.971 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:59.971 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:59.971 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:59.971 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:59.971 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:59.971 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:59.971 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:59.971 00:08:59.971 NVM Specific Namespace Data 00:08:59.971 =========================== 00:08:59.971 Logical Block Storage Tag Mask: 0 00:08:59.971 Protection Information Capabilities: 00:08:59.971 16b Guard Protection Information Storage Tag Support: No 00:08:59.971 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:59.971 Storage Tag Check Read Support: No 00:08:59.971 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.971 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.971 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.971 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.971 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.971 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.971 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.971 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.971 ===================================================== 00:08:59.971 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:59.971 ===================================================== 00:08:59.971 Controller Capabilities/Features 00:08:59.971 ================================ 00:08:59.971 Vendor ID: 1b36 00:08:59.971 Subsystem Vendor ID: 1af4 00:08:59.971 Serial Number: 12341 00:08:59.971 Model Number: QEMU NVMe Ctrl 00:08:59.971 Firmware Version: 8.0.0 00:08:59.971 Recommended Arb Burst: 6 00:08:59.971 IEEE OUI Identifier: 00 54 52 00:08:59.971 Multi-path I/O 00:08:59.971 May have multiple subsystem ports: No 00:08:59.971 May have multiple controllers: No 00:08:59.971 Associated with SR-IOV VF: No 00:08:59.971 Max Data Transfer Size: 524288 00:08:59.971 Max Number of Namespaces: 256 00:08:59.971 Max Number of I/O Queues: 64 00:08:59.971 NVMe Specification Version (VS): 1.4 00:08:59.971 NVMe Specification Version (Identify): 1.4 00:08:59.971 Maximum Queue Entries: 2048 00:08:59.971 Contiguous Queues Required: Yes 00:08:59.971 Arbitration Mechanisms Supported 00:08:59.971 Weighted Round Robin: Not Supported 00:08:59.971 Vendor Specific: Not Supported 00:08:59.971 Reset Timeout: 7500 ms 00:08:59.971 Doorbell Stride: 
4 bytes 00:08:59.971 NVM Subsystem Reset: Not Supported 00:08:59.971 Command Sets Supported 00:08:59.971 NVM Command Set: Supported 00:08:59.971 Boot Partition: Not Supported 00:08:59.971 Memory Page Size Minimum: 4096 bytes 00:08:59.971 Memory Page Size Maximum: 65536 bytes 00:08:59.971 Persistent Memory Region: Not Supported 00:08:59.971 Optional Asynchronous Events Supported 00:08:59.971 Namespace Attribute Notices: Supported 00:08:59.971 Firmware Activation Notices: Not Supported 00:08:59.971 ANA Change Notices: Not Supported 00:08:59.971 PLE Aggregate Log Change Notices: Not Supported 00:08:59.971 LBA Status Info Alert Notices: Not Supported 00:08:59.971 EGE Aggregate Log Change Notices: Not Supported 00:08:59.971 Normal NVM Subsystem Shutdown event: Not Supported 00:08:59.971 Zone Descriptor Change Notices: Not Supported 00:08:59.971 Discovery Log Change Notices: Not Supported 00:08:59.971 Controller Attributes 00:08:59.971 128-bit Host Identifier: Not Supported 00:08:59.971 Non-Operational Permissive Mode: Not Supported 00:08:59.971 NVM Sets: Not Supported 00:08:59.971 Read Recovery Levels: Not Supported 00:08:59.971 Endurance Groups: Not Supported 00:08:59.971 Predictable Latency Mode: Not Supported 00:08:59.971 Traffic Based Keep Alive: Not Supported 00:08:59.971 Namespace Granularity: Not Supported 00:08:59.971 SQ Associations: Not Supported 00:08:59.971 UUID List: Not Supported 00:08:59.971 Multi-Domain Subsystem: Not Supported 00:08:59.971 Fixed Capacity Management: Not Supported 00:08:59.971 Variable Capacity Management: Not Supported 00:08:59.971 Delete Endurance Group: Not Supported 00:08:59.971 Delete NVM Set: Not Supported 00:08:59.971 Extended LBA Formats Supported: Supported 00:08:59.971 Flexible Data Placement Supported: Not Supported 00:08:59.971 00:08:59.971 Controller Memory Buffer Support 00:08:59.971 ================================ 00:08:59.971 Supported: No 00:08:59.971 00:08:59.971 Persistent Memory Region Support 00:08:59.971 ================================ 00:08:59.971 Supported: No 00:08:59.971 00:08:59.971 Admin Command Set Attributes 00:08:59.971 ============================ 00:08:59.971 Security Send/Receive: Not Supported 00:08:59.971 Format NVM: Supported 00:08:59.971 Firmware Activate/Download: Not Supported 00:08:59.971 Namespace Management: Supported 00:08:59.971 Device Self-Test: Not Supported 00:08:59.971 Directives: Supported 00:08:59.971 NVMe-MI: Not Supported 00:08:59.971 Virtualization Management: Not Supported 00:08:59.971 Doorbell Buffer Config: Supported 00:08:59.971 Get LBA Status Capability: Not Supported 00:08:59.971 Command & Feature Lockdown Capability: Not Supported 00:08:59.971 Abort Command Limit: 4 00:08:59.971 Async Event Request Limit: 4 00:08:59.971 Number of Firmware Slots: N/A 00:08:59.971 Firmware Slot 1 Read-Only: N/A 00:08:59.971 Firmware Activation Without Reset: N/A 00:08:59.971 Multiple Update Detection Support: N/A 00:08:59.971 Firmware Update Granularity: No Information Provided 00:08:59.971 Per-Namespace SMART Log: Yes 00:08:59.971 Asymmetric Namespace Access Log Page: Not Supported 00:08:59.971 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:59.971 Command Effects Log Page: Supported 00:08:59.971 Get Log Page Extended Data: Supported 00:08:59.971 Telemetry Log Pages: Not Supported 00:08:59.971 Persistent Event Log Pages: Not Supported 00:08:59.971 Supported Log Pages Log Page: May Support 00:08:59.971 Commands Supported & Effects Log Page: Not Supported 00:08:59.971 Feature Identifiers & Effects Log Page: May Support
00:08:59.971 NVMe-MI Commands & Effects Log Page: May Support 00:08:59.971 Data Area 4 for Telemetry Log: Not Supported 00:08:59.971 Error Log Page Entries Supported: 1 00:08:59.971 Keep Alive: Not Supported 00:08:59.971 00:08:59.971 NVM Command Set Attributes 00:08:59.971 ========================== 00:08:59.971 Submission Queue Entry Size 00:08:59.971 Max: 64 00:08:59.971 Min: 64 00:08:59.971 Completion Queue Entry Size 00:08:59.971 Max: 16 00:08:59.971 Min: 16 00:08:59.971 Number of Namespaces: 256 00:08:59.971 Compare Command: Supported 00:08:59.971 Write Uncorrectable Command: Not Supported 00:08:59.971 Dataset Management Command: Supported 00:08:59.971 Write Zeroes Command: Supported 00:08:59.971 Set Features Save Field: Supported 00:08:59.971 Reservations: Not Supported 00:08:59.971 Timestamp: Supported 00:08:59.971 Copy: Supported 00:08:59.971 Volatile Write Cache: Present 00:08:59.971 Atomic Write Unit (Normal): 1 00:08:59.971 Atomic Write Unit (PFail): 1 00:08:59.971 Atomic Compare & Write Unit: 1 00:08:59.971 Fused Compare & Write: Not Supported 00:08:59.971 Scatter-Gather List 00:08:59.971 SGL Command Set: Supported 00:08:59.971 SGL Keyed: Not Supported 00:08:59.971 SGL Bit Bucket Descriptor: Not Supported 00:08:59.971 SGL Metadata Pointer: Not Supported 00:08:59.971 Oversized SGL: Not Supported 00:08:59.971 SGL Metadata Address: Not Supported 00:08:59.971 SGL Offset: Not Supported 00:08:59.971 Transport SGL Data Block: Not Supported 00:08:59.971 Replay Protected Memory Block: Not Supported 00:08:59.971 00:08:59.971 Firmware Slot Information 00:08:59.971 ========================= 00:08:59.971 Active slot: 1 00:08:59.971 Slot 1 Firmware Revision: 1.0 00:08:59.971 00:08:59.971 00:08:59.971 Commands Supported and Effects 00:08:59.971 ============================== 00:08:59.971 Admin Commands 00:08:59.971 -------------- 00:08:59.972 Delete I/O Submission Queue (00h): Supported 00:08:59.972 Create I/O Submission Queue (01h): Supported 00:08:59.972 Get Log Page (02h): Supported 00:08:59.972 Delete I/O Completion Queue (04h): Supported 00:08:59.972 Create I/O Completion Queue (05h): Supported 00:08:59.972 Identify (06h): Supported 00:08:59.972 Abort (08h): Supported 00:08:59.972 Set Features (09h): Supported 00:08:59.972 Get Features (0Ah): Supported 00:08:59.972 Asynchronous Event Request (0Ch): Supported 00:08:59.972 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:59.972 Directive Send (19h): Supported 00:08:59.972 Directive Receive (1Ah): Supported 00:08:59.972 Virtualization Management (1Ch): Supported 00:08:59.972 Doorbell Buffer Config (7Ch): Supported 00:08:59.972 Format NVM (80h): Supported LBA-Change 00:08:59.972 I/O Commands 00:08:59.972 ------------ 00:08:59.972 Flush (00h): Supported LBA-Change 00:08:59.972 Write (01h): Supported LBA-Change 00:08:59.972 Read (02h): Supported 00:08:59.972 Compare (05h): Supported 00:08:59.972 Write Zeroes (08h): Supported LBA-Change 00:08:59.972 Dataset Management (09h): Supported LBA-Change 00:08:59.972 Unknown (0Ch): Supported 00:08:59.972 Unknown (12h): Supported 00:08:59.972 Copy (19h): Supported LBA-Change 00:08:59.972 Unknown (1Dh): Supported LBA-Change 00:08:59.972 00:08:59.972 Error Log 00:08:59.972 ========= 00:08:59.972 00:08:59.972 Arbitration 00:08:59.972 =========== 00:08:59.972 Arbitration Burst: no limit 00:08:59.972 00:08:59.972 Power Management 00:08:59.972 ================ 00:08:59.972 Number of Power States: 1 00:08:59.972 Current Power State: Power State #0 00:08:59.972 Power State #0: 00:08:59.972 Max 
Power: 25.00 W 00:08:59.972 Non-Operational State: Operational 00:08:59.972 Entry Latency: 16 microseconds 00:08:59.972 Exit Latency: 4 microseconds 00:08:59.972 Relative Read Throughput: 0 00:08:59.972 Relative Read Latency: 0 00:08:59.972 Relative Write Throughput: 0 00:08:59.972 Relative Write Latency: 0 00:08:59.972 Idle Power: Not Reported 00:08:59.972 Active Power: Not Reported 00:08:59.972 Non-Operational Permissive Mode: Not Supported 00:08:59.972 00:08:59.972 Health Information 00:08:59.972 ================== 00:08:59.972 Critical Warnings: 00:08:59.972 Available Spare Space: OK 00:08:59.972 [2024-07-13 05:55:51.652692] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 80213 terminated unexpected 00:08:59.972 [2024-07-13 05:55:51.653987] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 80213 terminated unexpected 00:08:59.972 [2024-07-13 05:55:51.654803] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 80213 terminated unexpected 00:08:59.972 Temperature: OK 00:08:59.972 Device Reliability: OK 00:08:59.972 Read Only: No 00:08:59.972 Volatile Memory Backup: OK 00:08:59.972 Current Temperature: 323 Kelvin (50 Celsius) 00:08:59.972 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:59.972 Available Spare: 0% 00:08:59.972 Available Spare Threshold: 0% 00:08:59.972 Life Percentage Used: 0% 00:08:59.972 Data Units Read: 749 00:08:59.972 Data Units Written: 597 00:08:59.972 Host Read Commands: 34388 00:08:59.972 Host Write Commands: 32070 00:08:59.972 Controller Busy Time: 0 minutes 00:08:59.972 Power Cycles: 0 00:08:59.972 Power On Hours: 0 hours 00:08:59.972 Unsafe Shutdowns: 0 00:08:59.972 Unrecoverable Media Errors: 0 00:08:59.972 Lifetime Error Log Entries: 0 00:08:59.972 Warning Temperature Time: 0 minutes 00:08:59.972 Critical Temperature Time: 0 minutes 00:08:59.972 00:08:59.972 Number of Queues 00:08:59.972 ================ 00:08:59.972 Number of I/O Submission Queues: 64 00:08:59.972 Number of I/O Completion Queues: 64 00:08:59.972 00:08:59.972 ZNS Specific Controller Data 00:08:59.972 ============================ 00:08:59.972 Zone Append Size Limit: 0 00:08:59.972 00:08:59.972 00:08:59.972 Active Namespaces 00:08:59.972 ================= 00:08:59.972 Namespace ID:1 00:08:59.972 Error Recovery Timeout: Unlimited 00:08:59.972 Command Set Identifier: NVM (00h) 00:08:59.972 Deallocate: Supported 00:08:59.972 Deallocated/Unwritten Error: Supported 00:08:59.972 Deallocated Read Value: All 0x00 00:08:59.972 Deallocate in Write Zeroes: Not Supported 00:08:59.972 Deallocated Guard Field: 0xFFFF 00:08:59.972 Flush: Supported 00:08:59.972 Reservation: Not Supported 00:08:59.972 Namespace Sharing Capabilities: Private 00:08:59.972 Size (in LBAs): 1310720 (5GiB) 00:08:59.972 Capacity (in LBAs): 1310720 (5GiB) 00:08:59.972 Utilization (in LBAs): 1310720 (5GiB) 00:08:59.972 Thin Provisioning: Not Supported 00:08:59.972 Per-NS Atomic Units: No 00:08:59.972 Maximum Single Source Range Length: 128 00:08:59.972 Maximum Copy Length: 128 00:08:59.972 Maximum Source Range Count: 128 00:08:59.972 NGUID/EUI64 Never Reused: No 00:08:59.972 Namespace Write Protected: No 00:08:59.972 Number of LBA Formats: 8 00:08:59.972 Current LBA Format: LBA Format #04 00:08:59.972 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:59.972 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:59.972 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:59.972 LBA Format #03: Data Size: 512 Metadata Size: 64
00:08:59.972 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:59.972 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:59.972 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:59.972 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:59.972 00:08:59.972 NVM Specific Namespace Data 00:08:59.972 =========================== 00:08:59.972 Logical Block Storage Tag Mask: 0 00:08:59.972 Protection Information Capabilities: 00:08:59.972 16b Guard Protection Information Storage Tag Support: No 00:08:59.972 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:59.972 Storage Tag Check Read Support: No 00:08:59.972 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.972 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.972 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.972 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.972 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.972 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.972 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.972 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.972 ===================================================== 00:08:59.972 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:59.972 ===================================================== 00:08:59.972 Controller Capabilities/Features 00:08:59.972 ================================ 00:08:59.972 Vendor ID: 1b36 00:08:59.972 Subsystem Vendor ID: 1af4 00:08:59.972 Serial Number: 12343 00:08:59.972 Model Number: QEMU NVMe Ctrl 00:08:59.972 Firmware Version: 8.0.0 00:08:59.972 Recommended Arb Burst: 6 00:08:59.972 IEEE OUI Identifier: 00 54 52 00:08:59.972 Multi-path I/O 00:08:59.972 May have multiple subsystem ports: No 00:08:59.972 May have multiple controllers: Yes 00:08:59.972 Associated with SR-IOV VF: No 00:08:59.972 Max Data Transfer Size: 524288 00:08:59.972 Max Number of Namespaces: 256 00:08:59.972 Max Number of I/O Queues: 64 00:08:59.972 NVMe Specification Version (VS): 1.4 00:08:59.972 NVMe Specification Version (Identify): 1.4 00:08:59.972 Maximum Queue Entries: 2048 00:08:59.972 Contiguous Queues Required: Yes 00:08:59.972 Arbitration Mechanisms Supported 00:08:59.972 Weighted Round Robin: Not Supported 00:08:59.972 Vendor Specific: Not Supported 00:08:59.972 Reset Timeout: 7500 ms 00:08:59.972 Doorbell Stride: 4 bytes 00:08:59.972 NVM Subsystem Reset: Not Supported 00:08:59.972 Command Sets Supported 00:08:59.972 NVM Command Set: Supported 00:08:59.972 Boot Partition: Not Supported 00:08:59.972 Memory Page Size Minimum: 4096 bytes 00:08:59.972 Memory Page Size Maximum: 65536 bytes 00:08:59.972 Persistent Memory Region: Not Supported 00:08:59.972 Optional Asynchronous Events Supported 00:08:59.972 Namespace Attribute Notices: Supported 00:08:59.972 Firmware Activation Notices: Not Supported 00:08:59.972 ANA Change Notices: Not Supported 00:08:59.972 PLE Aggregate Log Change Notices: Not Supported 00:08:59.972 LBA Status Info Alert Notices: Not Supported 00:08:59.972 EGE Aggregate Log Change Notices: Not Supported 00:08:59.972 Normal NVM Subsystem Shutdown event: Not Supported 00:08:59.972 Zone Descriptor Change Notices: Not 
Supported 00:08:59.972 Discovery Log Change Notices: Not Supported 00:08:59.972 Controller Attributes 00:08:59.972 128-bit Host Identifier: Not Supported 00:08:59.972 Non-Operational Permissive Mode: Not Supported 00:08:59.972 NVM Sets: Not Supported 00:08:59.972 Read Recovery Levels: Not Supported 00:08:59.972 Endurance Groups: Supported 00:08:59.972 Predictable Latency Mode: Not Supported 00:08:59.972 Traffic Based Keep Alive: Not Supported 00:08:59.972 Namespace Granularity: Not Supported 00:08:59.972 SQ Associations: Not Supported 00:08:59.972 UUID List: Not Supported 00:08:59.972 Multi-Domain Subsystem: Not Supported 00:08:59.972 Fixed Capacity Management: Not Supported 00:08:59.972 Variable Capacity Management: Not Supported 00:08:59.972 Delete Endurance Group: Not Supported 00:08:59.972 Delete NVM Set: Not Supported 00:08:59.972 Extended LBA Formats Supported: Supported 00:08:59.972 Flexible Data Placement Supported: Supported 00:08:59.972 00:08:59.973 Controller Memory Buffer Support 00:08:59.973 ================================ 00:08:59.973 Supported: No 00:08:59.973 00:08:59.973 Persistent Memory Region Support 00:08:59.973 ================================ 00:08:59.973 Supported: No 00:08:59.973 00:08:59.973 Admin Command Set Attributes 00:08:59.973 ============================ 00:08:59.973 Security Send/Receive: Not Supported 00:08:59.973 Format NVM: Supported 00:08:59.973 Firmware Activate/Download: Not Supported 00:08:59.973 Namespace Management: Supported 00:08:59.973 Device Self-Test: Not Supported 00:08:59.973 Directives: Supported 00:08:59.973 NVMe-MI: Not Supported 00:08:59.973 Virtualization Management: Not Supported 00:08:59.973 Doorbell Buffer Config: Supported 00:08:59.973 Get LBA Status Capability: Not Supported 00:08:59.973 Command & Feature Lockdown Capability: Not Supported 00:08:59.973 Abort Command Limit: 4 00:08:59.973 Async Event Request Limit: 4 00:08:59.973 Number of Firmware Slots: N/A 00:08:59.973 Firmware Slot 1 Read-Only: N/A 00:08:59.973 Firmware Activation Without Reset: N/A 00:08:59.973 Multiple Update Detection Support: N/A 00:08:59.973 Firmware Update Granularity: No Information Provided 00:08:59.973 Per-Namespace SMART Log: Yes 00:08:59.973 Asymmetric Namespace Access Log Page: Not Supported 00:08:59.973 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:59.973 Command Effects Log Page: Supported 00:08:59.973 Get Log Page Extended Data: Supported 00:08:59.973 Telemetry Log Pages: Not Supported 00:08:59.973 Persistent Event Log Pages: Not Supported 00:08:59.973 Supported Log Pages Log Page: May Support 00:08:59.973 Commands Supported & Effects Log Page: Not Supported 00:08:59.973 Feature Identifiers & Effects Log Page: May Support 00:08:59.973 NVMe-MI Commands & Effects Log Page: May Support 00:08:59.973 Data Area 4 for Telemetry Log: Not Supported 00:08:59.973 Error Log Page Entries Supported: 1 00:08:59.973 Keep Alive: Not Supported 00:08:59.973 00:08:59.973 NVM Command Set Attributes 00:08:59.973 ========================== 00:08:59.973 Submission Queue Entry Size 00:08:59.973 Max: 64 00:08:59.973 Min: 64 00:08:59.973 Completion Queue Entry Size 00:08:59.973 Max: 16 00:08:59.973 Min: 16 00:08:59.973 Number of Namespaces: 256 00:08:59.973 Compare Command: Supported 00:08:59.973 Write Uncorrectable Command: Not Supported 00:08:59.973 Dataset Management Command: Supported 00:08:59.973 Write Zeroes Command: Supported 00:08:59.973 Set Features Save Field: Supported 00:08:59.973 Reservations: Not Supported 00:08:59.973 Timestamp: Supported
00:08:59.973 Copy: Supported 00:08:59.973 Volatile Write Cache: Present 00:08:59.973 Atomic Write Unit (Normal): 1 00:08:59.973 Atomic Write Unit (PFail): 1 00:08:59.973 Atomic Compare & Write Unit: 1 00:08:59.973 Fused Compare & Write: Not Supported 00:08:59.973 Scatter-Gather List 00:08:59.973 SGL Command Set: Supported 00:08:59.973 SGL Keyed: Not Supported 00:08:59.973 SGL Bit Bucket Descriptor: Not Supported 00:08:59.973 SGL Metadata Pointer: Not Supported 00:08:59.973 Oversized SGL: Not Supported 00:08:59.973 SGL Metadata Address: Not Supported 00:08:59.973 SGL Offset: Not Supported 00:08:59.973 Transport SGL Data Block: Not Supported 00:08:59.973 Replay Protected Memory Block: Not Supported 00:08:59.973 00:08:59.973 Firmware Slot Information 00:08:59.973 ========================= 00:08:59.973 Active slot: 1 00:08:59.973 Slot 1 Firmware Revision: 1.0 00:08:59.973 00:08:59.973 00:08:59.973 Commands Supported and Effects 00:08:59.973 ============================== 00:08:59.973 Admin Commands 00:08:59.973 -------------- 00:08:59.973 Delete I/O Submission Queue (00h): Supported 00:08:59.973 Create I/O Submission Queue (01h): Supported 00:08:59.973 Get Log Page (02h): Supported 00:08:59.973 Delete I/O Completion Queue (04h): Supported 00:08:59.973 Create I/O Completion Queue (05h): Supported 00:08:59.973 Identify (06h): Supported 00:08:59.973 Abort (08h): Supported 00:08:59.973 Set Features (09h): Supported 00:08:59.973 Get Features (0Ah): Supported 00:08:59.973 Asynchronous Event Request (0Ch): Supported 00:08:59.973 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:59.973 Directive Send (19h): Supported 00:08:59.973 Directive Receive (1Ah): Supported 00:08:59.973 Virtualization Management (1Ch): Supported 00:08:59.973 Doorbell Buffer Config (7Ch): Supported 00:08:59.973 Format NVM (80h): Supported LBA-Change 00:08:59.973 I/O Commands 00:08:59.973 ------------ 00:08:59.973 Flush (00h): Supported LBA-Change 00:08:59.973 Write (01h): Supported LBA-Change 00:08:59.973 Read (02h): Supported 00:08:59.973 Compare (05h): Supported 00:08:59.973 Write Zeroes (08h): Supported LBA-Change 00:08:59.973 Dataset Management (09h): Supported LBA-Change 00:08:59.973 Unknown (0Ch): Supported 00:08:59.973 Unknown (12h): Supported 00:08:59.973 Copy (19h): Supported LBA-Change 00:08:59.973 Unknown (1Dh): Supported LBA-Change 00:08:59.973 00:08:59.973 Error Log 00:08:59.973 ========= 00:08:59.973 00:08:59.973 Arbitration 00:08:59.973 =========== 00:08:59.973 Arbitration Burst: no limit 00:08:59.973 00:08:59.973 Power Management 00:08:59.973 ================ 00:08:59.973 Number of Power States: 1 00:08:59.973 Current Power State: Power State #0 00:08:59.973 Power State #0: 00:08:59.973 Max Power: 25.00 W 00:08:59.973 Non-Operational State: Operational 00:08:59.973 Entry Latency: 16 microseconds 00:08:59.973 Exit Latency: 4 microseconds 00:08:59.973 Relative Read Throughput: 0 00:08:59.973 Relative Read Latency: 0 00:08:59.973 Relative Write Throughput: 0 00:08:59.973 Relative Write Latency: 0 00:08:59.973 Idle Power: Not Reported 00:08:59.973 Active Power: Not Reported 00:08:59.973 Non-Operational Permissive Mode: Not Supported 00:08:59.973 00:08:59.973 Health Information 00:08:59.973 ================== 00:08:59.973 Critical Warnings: 00:08:59.973 Available Spare Space: OK 00:08:59.973 Temperature: OK 00:08:59.973 Device Reliability: OK 00:08:59.973 Read Only: No 00:08:59.973 Volatile Memory Backup: OK 00:08:59.973 Current Temperature: 323 Kelvin (50 Celsius) 00:08:59.973 Temperature Threshold: 343 
Kelvin (70 Celsius) 00:08:59.973 Available Spare: 0% 00:08:59.973 Available Spare Threshold: 0% 00:08:59.973 Life Percentage Used: 0% 00:08:59.973 Data Units Read: 811 00:08:59.973 Data Units Written: 704 00:08:59.973 Host Read Commands: 34387 00:08:59.973 Host Write Commands: 32977 00:08:59.973 Controller Busy Time: 0 minutes 00:08:59.973 Power Cycles: 0 00:08:59.973 Power On Hours: 0 hours 00:08:59.973 Unsafe Shutdowns: 0 00:08:59.973 Unrecoverable Media Errors: 0 00:08:59.973 Lifetime Error Log Entries: 0 00:08:59.973 Warning Temperature Time: 0 minutes 00:08:59.973 Critical Temperature Time: 0 minutes 00:08:59.973 00:08:59.973 Number of Queues 00:08:59.973 ================ 00:08:59.973 Number of I/O Submission Queues: 64 00:08:59.973 Number of I/O Completion Queues: 64 00:08:59.973 00:08:59.973 ZNS Specific Controller Data 00:08:59.973 ============================ 00:08:59.973 Zone Append Size Limit: 0 00:08:59.973 00:08:59.973 00:08:59.973 Active Namespaces 00:08:59.973 ================= 00:08:59.973 Namespace ID:1 00:08:59.973 Error Recovery Timeout: Unlimited 00:08:59.973 Command Set Identifier: NVM (00h) 00:08:59.973 Deallocate: Supported 00:08:59.973 Deallocated/Unwritten Error: Supported 00:08:59.973 Deallocated Read Value: All 0x00 00:08:59.973 Deallocate in Write Zeroes: Not Supported 00:08:59.973 Deallocated Guard Field: 0xFFFF 00:08:59.973 Flush: Supported 00:08:59.973 Reservation: Not Supported 00:08:59.973 Namespace Sharing Capabilities: Multiple Controllers 00:08:59.973 Size (in LBAs): 262144 (1GiB) 00:08:59.973 Capacity (in LBAs): 262144 (1GiB) 00:08:59.973 Utilization (in LBAs): 262144 (1GiB) 00:08:59.973 Thin Provisioning: Not Supported 00:08:59.973 Per-NS Atomic Units: No 00:08:59.973 Maximum Single Source Range Length: 128 00:08:59.973 Maximum Copy Length: 128 00:08:59.973 Maximum Source Range Count: 128 00:08:59.973 NGUID/EUI64 Never Reused: No 00:08:59.973 Namespace Write Protected: No 00:08:59.973 Endurance group ID: 1 00:08:59.973 Number of LBA Formats: 8 00:08:59.973 Current LBA Format: LBA Format #04 00:08:59.973 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:59.973 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:59.973 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:59.973 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:59.973 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:59.973 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:59.973 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:59.973 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:59.973 00:08:59.973 Get Feature FDP: 00:08:59.973 ================ 00:08:59.973 Enabled: Yes 00:08:59.973 FDP configuration index: 0 00:08:59.973 00:08:59.973 FDP configurations log page 00:08:59.973 =========================== 00:08:59.973 Number of FDP configurations: 1 00:08:59.973 Version: 0 00:08:59.973 Size: 112 00:08:59.973 FDP Configuration Descriptor: 0 00:08:59.974 Descriptor Size: 96 00:08:59.974 Reclaim Group Identifier format: 2 00:08:59.974 FDP Volatile Write Cache: Not Present 00:08:59.974 FDP Configuration: Valid 00:08:59.974 Vendor Specific Size: 0 00:08:59.974 Number of Reclaim Groups: 2 00:08:59.974 Number of Reclaim Unit Handles: 8 00:08:59.974 Max Placement Identifiers: 128 00:08:59.974 Number of Namespaces Supported: 256 00:08:59.974 Reclaim Unit Nominal Size: 6000000 bytes 00:08:59.974 Estimated Reclaim Unit Time Limit: Not Reported 00:08:59.974 RUH Desc #000: RUH Type: Initially Isolated 00:08:59.974 RUH Desc #001: RUH Type: Initially Isolated
00:08:59.974 RUH Desc #002: RUH Type: Initially Isolated 00:08:59.974 RUH Desc #003: RUH Type: Initially Isolated 00:08:59.974 RUH Desc #004: RUH Type: Initially Isolated 00:08:59.974 RUH Desc #005: RUH Type: Initially Isolated 00:08:59.974 RUH Desc #006: RUH Type: Initially Isolated 00:08:59.974 RUH Desc #007: RUH Type: Initially Isolated 00:08:59.974 00:08:59.974 FDP reclaim unit handle usage log page 00:08:59.974 ====================================== 00:08:59.974 Number of Reclaim Unit Handles: 8 00:08:59.974 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:59.974 RUH Usage Desc #001: RUH Attributes: Unused 00:08:59.974 RUH Usage Desc #002: RUH Attributes: Unused 00:08:59.974 RUH Usage Desc #003: RUH Attributes: Unused 00:08:59.974 RUH Usage Desc #004: RUH Attributes: Unused 00:08:59.974 RUH Usage Desc #005: RUH Attributes: Unused 00:08:59.974 RUH Usage Desc #006: RUH Attributes: Unused 00:08:59.974 RUH Usage Desc #007: RUH Attributes: Unused 00:08:59.974 00:08:59.974 FDP statistics log page 00:08:59.974 ======================= 00:08:59.974 Host bytes with metadata written: 442146816 00:08:59.974 [2024-07-13 05:55:51.657598] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 80213 terminated unexpected 00:08:59.974 Media bytes with metadata written: 442200064 00:08:59.974 Media bytes erased: 0 00:08:59.974 00:08:59.974 FDP events log page 00:08:59.974 =================== 00:08:59.974 Number of FDP events: 0 00:08:59.974 00:08:59.974 NVM Specific Namespace Data 00:08:59.974 =========================== 00:08:59.974 Logical Block Storage Tag Mask: 0 00:08:59.974 Protection Information Capabilities: 00:08:59.974 16b Guard Protection Information Storage Tag Support: No 00:08:59.974 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:59.974 Storage Tag Check Read Support: No 00:08:59.974 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.974 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.974 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.974 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.974 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.974 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.974 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.974 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.974 ===================================================== 00:08:59.974 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:59.974 ===================================================== 00:08:59.974 Controller Capabilities/Features 00:08:59.974 ================================ 00:08:59.974 Vendor ID: 1b36 00:08:59.974 Subsystem Vendor ID: 1af4 00:08:59.974 Serial Number: 12342 00:08:59.974 Model Number: QEMU NVMe Ctrl 00:08:59.974 Firmware Version: 8.0.0 00:08:59.974 Recommended Arb Burst: 6 00:08:59.974 IEEE OUI Identifier: 00 54 52 00:08:59.974 Multi-path I/O 00:08:59.974 May have multiple subsystem ports: No 00:08:59.974 May have multiple controllers: No 00:08:59.974 Associated with SR-IOV VF: No 00:08:59.974 Max Data Transfer Size: 524288 00:08:59.974 Max Number of Namespaces: 256 00:08:59.974 Max Number of I/O Queues:
64 00:08:59.974 NVMe Specification Version (VS): 1.4 00:08:59.974 NVMe Specification Version (Identify): 1.4 00:08:59.974 Maximum Queue Entries: 2048 00:08:59.974 Contiguous Queues Required: Yes 00:08:59.974 Arbitration Mechanisms Supported 00:08:59.974 Weighted Round Robin: Not Supported 00:08:59.974 Vendor Specific: Not Supported 00:08:59.974 Reset Timeout: 7500 ms 00:08:59.974 Doorbell Stride: 4 bytes 00:08:59.974 NVM Subsystem Reset: Not Supported 00:08:59.974 Command Sets Supported 00:08:59.974 NVM Command Set: Supported 00:08:59.974 Boot Partition: Not Supported 00:08:59.974 Memory Page Size Minimum: 4096 bytes 00:08:59.974 Memory Page Size Maximum: 65536 bytes 00:08:59.974 Persistent Memory Region: Not Supported 00:08:59.974 Optional Asynchronous Events Supported 00:08:59.974 Namespace Attribute Notices: Supported 00:08:59.974 Firmware Activation Notices: Not Supported 00:08:59.974 ANA Change Notices: Not Supported 00:08:59.974 PLE Aggregate Log Change Notices: Not Supported 00:08:59.974 LBA Status Info Alert Notices: Not Supported 00:08:59.974 EGE Aggregate Log Change Notices: Not Supported 00:08:59.974 Normal NVM Subsystem Shutdown event: Not Supported 00:08:59.974 Zone Descriptor Change Notices: Not Supported 00:08:59.974 Discovery Log Change Notices: Not Supported 00:08:59.974 Controller Attributes 00:08:59.974 128-bit Host Identifier: Not Supported 00:08:59.974 Non-Operational Permissive Mode: Not Supported 00:08:59.974 NVM Sets: Not Supported 00:08:59.974 Read Recovery Levels: Not Supported 00:08:59.974 Endurance Groups: Not Supported 00:08:59.974 Predictable Latency Mode: Not Supported 00:08:59.974 Traffic Based Keep Alive: Not Supported 00:08:59.974 Namespace Granularity: Not Supported 00:08:59.974 SQ Associations: Not Supported 00:08:59.974 UUID List: Not Supported 00:08:59.974 Multi-Domain Subsystem: Not Supported 00:08:59.974 Fixed Capacity Management: Not Supported 00:08:59.974 Variable Capacity Management: Not Supported 00:08:59.974 Delete Endurance Group: Not Supported 00:08:59.974 Delete NVM Set: Not Supported 00:08:59.974 Extended LBA Formats Supported: Supported 00:08:59.974 Flexible Data Placement Supported: Not Supported 00:08:59.974 00:08:59.974 Controller Memory Buffer Support 00:08:59.974 ================================ 00:08:59.974 Supported: No 00:08:59.974 00:08:59.974 Persistent Memory Region Support 00:08:59.974 ================================ 00:08:59.974 Supported: No 00:08:59.974 00:08:59.974 Admin Command Set Attributes 00:08:59.974 ============================ 00:08:59.974 Security Send/Receive: Not Supported 00:08:59.974 Format NVM: Supported 00:08:59.974 Firmware Activate/Download: Not Supported 00:08:59.974 Namespace Management: Supported 00:08:59.974 Device Self-Test: Not Supported 00:08:59.974 Directives: Supported 00:08:59.974 NVMe-MI: Not Supported 00:08:59.974 Virtualization Management: Not Supported 00:08:59.974 Doorbell Buffer Config: Supported 00:08:59.974 Get LBA Status Capability: Not Supported 00:08:59.974 Command & Feature Lockdown Capability: Not Supported 00:08:59.974 Abort Command Limit: 4 00:08:59.974 Async Event Request Limit: 4 00:08:59.974 Number of Firmware Slots: N/A 00:08:59.974 Firmware Slot 1 Read-Only: N/A 00:08:59.974 Firmware Activation Without Reset: N/A 00:08:59.974 Multiple Update Detection Support: N/A 00:08:59.974 Firmware Update Granularity: No Information Provided 00:08:59.974 Per-Namespace SMART Log: Yes 00:08:59.974 Asymmetric Namespace Access Log Page: Not Supported 00:08:59.974 Subsystem NQN:
nqn.2019-08.org.qemu:12342 00:08:59.974 Command Effects Log Page: Supported 00:08:59.974 Get Log Page Extended Data: Supported 00:08:59.975 Telemetry Log Pages: Not Supported 00:08:59.975 Persistent Event Log Pages: Not Supported 00:08:59.975 Supported Log Pages Log Page: May Support 00:08:59.975 Commands Supported & Effects Log Page: Not Supported 00:08:59.975 Feature Identifiers & Effects Log Page: May Support 00:08:59.975 NVMe-MI Commands & Effects Log Page: May Support 00:08:59.975 Data Area 4 for Telemetry Log: Not Supported 00:08:59.975 Error Log Page Entries Supported: 1 00:08:59.975 Keep Alive: Not Supported 00:08:59.975 00:08:59.975 NVM Command Set Attributes 00:08:59.975 ========================== 00:08:59.975 Submission Queue Entry Size 00:08:59.975 Max: 64 00:08:59.975 Min: 64 00:08:59.975 Completion Queue Entry Size 00:08:59.975 Max: 16 00:08:59.975 Min: 16 00:08:59.975 Number of Namespaces: 256 00:08:59.975 Compare Command: Supported 00:08:59.975 Write Uncorrectable Command: Not Supported 00:08:59.975 Dataset Management Command: Supported 00:08:59.975 Write Zeroes Command: Supported 00:08:59.975 Set Features Save Field: Supported 00:08:59.975 Reservations: Not Supported 00:08:59.975 Timestamp: Supported 00:08:59.975 Copy: Supported 00:08:59.975 Volatile Write Cache: Present 00:08:59.975 Atomic Write Unit (Normal): 1 00:08:59.975 Atomic Write Unit (PFail): 1 00:08:59.975 Atomic Compare & Write Unit: 1 00:08:59.975 Fused Compare & Write: Not Supported 00:08:59.975 Scatter-Gather List 00:08:59.975 SGL Command Set: Supported 00:08:59.975 SGL Keyed: Not Supported 00:08:59.975 SGL Bit Bucket Descriptor: Not Supported 00:08:59.975 SGL Metadata Pointer: Not Supported 00:08:59.975 Oversized SGL: Not Supported 00:08:59.975 SGL Metadata Address: Not Supported 00:08:59.975 SGL Offset: Not Supported 00:08:59.975 Transport SGL Data Block: Not Supported 00:08:59.975 Replay Protected Memory Block: Not Supported 00:08:59.975 00:08:59.975 Firmware Slot Information 00:08:59.975 ========================= 00:08:59.975 Active slot: 1 00:08:59.975 Slot 1 Firmware Revision: 1.0 00:08:59.975 00:08:59.975 00:08:59.975 Commands Supported and Effects 00:08:59.975 ============================== 00:08:59.975 Admin Commands 00:08:59.975 -------------- 00:08:59.975 Delete I/O Submission Queue (00h): Supported 00:08:59.975 Create I/O Submission Queue (01h): Supported 00:08:59.975 Get Log Page (02h): Supported 00:08:59.975 Delete I/O Completion Queue (04h): Supported 00:08:59.975 Create I/O Completion Queue (05h): Supported 00:08:59.975 Identify (06h): Supported 00:08:59.975 Abort (08h): Supported 00:08:59.975 Set Features (09h): Supported 00:08:59.975 Get Features (0Ah): Supported 00:08:59.975 Asynchronous Event Request (0Ch): Supported 00:08:59.975 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:59.975 Directive Send (19h): Supported 00:08:59.975 Directive Receive (1Ah): Supported 00:08:59.975 Virtualization Management (1Ch): Supported 00:08:59.975 Doorbell Buffer Config (7Ch): Supported 00:08:59.975 Format NVM (80h): Supported LBA-Change 00:08:59.975 I/O Commands 00:08:59.975 ------------ 00:08:59.975 Flush (00h): Supported LBA-Change 00:08:59.975 Write (01h): Supported LBA-Change 00:08:59.975 Read (02h): Supported 00:08:59.975 Compare (05h): Supported 00:08:59.975 Write Zeroes (08h): Supported LBA-Change 00:08:59.975 Dataset Management (09h): Supported LBA-Change 00:08:59.975 Unknown (0Ch): Supported 00:08:59.975 Unknown (12h): Supported 00:08:59.975 Copy (19h): Supported LBA-Change
00:08:59.975 Unknown (1Dh): Supported LBA-Change 00:08:59.975 00:08:59.975 Error Log 00:08:59.975 ========= 00:08:59.975 00:08:59.975 Arbitration 00:08:59.975 =========== 00:08:59.975 Arbitration Burst: no limit 00:08:59.975 00:08:59.975 Power Management 00:08:59.975 ================ 00:08:59.975 Number of Power States: 1 00:08:59.975 Current Power State: Power State #0 00:08:59.975 Power State #0: 00:08:59.975 Max Power: 25.00 W 00:08:59.975 Non-Operational State: Operational 00:08:59.975 Entry Latency: 16 microseconds 00:08:59.975 Exit Latency: 4 microseconds 00:08:59.975 Relative Read Throughput: 0 00:08:59.975 Relative Read Latency: 0 00:08:59.975 Relative Write Throughput: 0 00:08:59.975 Relative Write Latency: 0 00:08:59.975 Idle Power: Not Reported 00:08:59.975 Active Power: Not Reported 00:08:59.975 Non-Operational Permissive Mode: Not Supported 00:08:59.975 00:08:59.975 Health Information 00:08:59.975 ================== 00:08:59.975 Critical Warnings: 00:08:59.975 Available Spare Space: OK 00:08:59.975 Temperature: OK 00:08:59.975 Device Reliability: OK 00:08:59.975 Read Only: No 00:08:59.975 Volatile Memory Backup: OK 00:08:59.975 Current Temperature: 323 Kelvin (50 Celsius) 00:08:59.975 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:59.975 Available Spare: 0% 00:08:59.975 Available Spare Threshold: 0% 00:08:59.975 Life Percentage Used: 0% 00:08:59.975 Data Units Read: 2252 00:08:59.975 Data Units Written: 1932 00:08:59.975 Host Read Commands: 101698 00:08:59.975 Host Write Commands: 97472 00:08:59.975 Controller Busy Time: 0 minutes 00:08:59.975 Power Cycles: 0 00:08:59.975 Power On Hours: 0 hours 00:08:59.975 Unsafe Shutdowns: 0 00:08:59.975 Unrecoverable Media Errors: 0 00:08:59.975 Lifetime Error Log Entries: 0 00:08:59.975 Warning Temperature Time: 0 minutes 00:08:59.975 Critical Temperature Time: 0 minutes 00:08:59.975 00:08:59.975 Number of Queues 00:08:59.975 ================ 00:08:59.975 Number of I/O Submission Queues: 64 00:08:59.975 Number of I/O Completion Queues: 64 00:08:59.975 00:08:59.975 ZNS Specific Controller Data 00:08:59.975 ============================ 00:08:59.975 Zone Append Size Limit: 0 00:08:59.975 00:08:59.975 00:08:59.975 Active Namespaces 00:08:59.975 ================= 00:08:59.975 Namespace ID:1 00:08:59.975 Error Recovery Timeout: Unlimited 00:08:59.975 Command Set Identifier: NVM (00h) 00:08:59.975 Deallocate: Supported 00:08:59.975 Deallocated/Unwritten Error: Supported 00:08:59.975 Deallocated Read Value: All 0x00 00:08:59.975 Deallocate in Write Zeroes: Not Supported 00:08:59.975 Deallocated Guard Field: 0xFFFF 00:08:59.975 Flush: Supported 00:08:59.975 Reservation: Not Supported 00:08:59.975 Namespace Sharing Capabilities: Private 00:08:59.975 Size (in LBAs): 1048576 (4GiB) 00:08:59.975 Capacity (in LBAs): 1048576 (4GiB) 00:08:59.975 Utilization (in LBAs): 1048576 (4GiB) 00:08:59.975 Thin Provisioning: Not Supported 00:08:59.975 Per-NS Atomic Units: No 00:08:59.975 Maximum Single Source Range Length: 128 00:08:59.975 Maximum Copy Length: 128 00:08:59.975 Maximum Source Range Count: 128 00:08:59.975 NGUID/EUI64 Never Reused: No 00:08:59.975 Namespace Write Protected: No 00:08:59.975 Number of LBA Formats: 8 00:08:59.975 Current LBA Format: LBA Format #04 00:08:59.975 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:59.975 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:59.975 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:59.975 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:59.975 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:08:59.975 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:59.975 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:59.975 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:59.975 00:08:59.975 NVM Specific Namespace Data 00:08:59.975 =========================== 00:08:59.975 Logical Block Storage Tag Mask: 0 00:08:59.975 Protection Information Capabilities: 00:08:59.975 16b Guard Protection Information Storage Tag Support: No 00:08:59.975 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:59.975 Storage Tag Check Read Support: No 00:08:59.975 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.975 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.975 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.975 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.975 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.975 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.975 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.975 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.975 Namespace ID:2 00:08:59.975 Error Recovery Timeout: Unlimited 00:08:59.975 Command Set Identifier: NVM (00h) 00:08:59.975 Deallocate: Supported 00:08:59.975 Deallocated/Unwritten Error: Supported 00:08:59.975 Deallocated Read Value: All 0x00 00:08:59.975 Deallocate in Write Zeroes: Not Supported 00:08:59.975 Deallocated Guard Field: 0xFFFF 00:08:59.975 Flush: Supported 00:08:59.975 Reservation: Not Supported 00:08:59.975 Namespace Sharing Capabilities: Private 00:08:59.975 Size (in LBAs): 1048576 (4GiB) 00:08:59.975 Capacity (in LBAs): 1048576 (4GiB) 00:08:59.975 Utilization (in LBAs): 1048576 (4GiB) 00:08:59.975 Thin Provisioning: Not Supported 00:08:59.975 Per-NS Atomic Units: No 00:08:59.975 Maximum Single Source Range Length: 128 00:08:59.975 Maximum Copy Length: 128 00:08:59.975 Maximum Source Range Count: 128 00:08:59.975 NGUID/EUI64 Never Reused: No 00:08:59.975 Namespace Write Protected: No 00:08:59.975 Number of LBA Formats: 8 00:08:59.975 Current LBA Format: LBA Format #04 00:08:59.975 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:59.975 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:59.975 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:59.976 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:59.976 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:59.976 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:59.976 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:59.976 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:59.976 00:08:59.976 NVM Specific Namespace Data 00:08:59.976 =========================== 00:08:59.976 Logical Block Storage Tag Mask: 0 00:08:59.976 Protection Information Capabilities: 00:08:59.976 16b Guard Protection Information Storage Tag Support: No 00:08:59.976 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:59.976 Storage Tag Check Read Support: No 00:08:59.976 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.976 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:08:59.976 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.976 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.976 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.976 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.976 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.976 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:59.976 Namespace ID:3 00:08:59.976 Error Recovery Timeout: Unlimited 00:08:59.976 Command Set Identifier: NVM (00h) 00:08:59.976 Deallocate: Supported 00:08:59.976 Deallocated/Unwritten Error: Supported 00:08:59.976 Deallocated Read Value: All 0x00 00:08:59.976 Deallocate in Write Zeroes: Not Supported 00:08:59.976 Deallocated Guard Field: 0xFFFF 00:08:59.976 Flush: Supported 00:08:59.976 Reservation: Not Supported 00:08:59.976 Namespace Sharing Capabilities: Private 00:08:59.976 Size (in LBAs): 1048576 (4GiB) 00:09:00.235 Capacity (in LBAs): 1048576 (4GiB) 00:09:00.235 Utilization (in LBAs): 1048576 (4GiB) 00:09:00.235 Thin Provisioning: Not Supported 00:09:00.235 Per-NS Atomic Units: No 00:09:00.235 Maximum Single Source Range Length: 128 00:09:00.235 Maximum Copy Length: 128 00:09:00.235 Maximum Source Range Count: 128 00:09:00.235 NGUID/EUI64 Never Reused: No 00:09:00.235 Namespace Write Protected: No 00:09:00.235 Number of LBA Formats: 8 00:09:00.235 Current LBA Format: LBA Format #04 00:09:00.235 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:00.235 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:00.235 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:00.235 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:00.235 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:00.235 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:00.235 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:00.235 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:00.235 00:09:00.235 NVM Specific Namespace Data 00:09:00.235 =========================== 00:09:00.235 Logical Block Storage Tag Mask: 0 00:09:00.235 Protection Information Capabilities: 00:09:00.235 16b Guard Protection Information Storage Tag Support: No 00:09:00.235 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:00.235 Storage Tag Check Read Support: No 00:09:00.236 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.236 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.236 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.236 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.236 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.236 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.236 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.236 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.236 05:55:51 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:00.236 05:55:51 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:00.236 ===================================================== 00:09:00.236 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:00.236 ===================================================== 00:09:00.236 Controller Capabilities/Features 00:09:00.236 ================================ 00:09:00.236 Vendor ID: 1b36 00:09:00.236 Subsystem Vendor ID: 1af4 00:09:00.236 Serial Number: 12340 00:09:00.236 Model Number: QEMU NVMe Ctrl 00:09:00.236 Firmware Version: 8.0.0 00:09:00.236 Recommended Arb Burst: 6 00:09:00.236 IEEE OUI Identifier: 00 54 52 00:09:00.236 Multi-path I/O 00:09:00.236 May have multiple subsystem ports: No 00:09:00.236 May have multiple controllers: No 00:09:00.236 Associated with SR-IOV VF: No 00:09:00.236 Max Data Transfer Size: 524288 00:09:00.236 Max Number of Namespaces: 256 00:09:00.236 Max Number of I/O Queues: 64 00:09:00.236 NVMe Specification Version (VS): 1.4 00:09:00.236 NVMe Specification Version (Identify): 1.4 00:09:00.236 Maximum Queue Entries: 2048 00:09:00.236 Contiguous Queues Required: Yes 00:09:00.236 Arbitration Mechanisms Supported 00:09:00.236 Weighted Round Robin: Not Supported 00:09:00.236 Vendor Specific: Not Supported 00:09:00.236 Reset Timeout: 7500 ms 00:09:00.236 Doorbell Stride: 4 bytes 00:09:00.236 NVM Subsystem Reset: Not Supported 00:09:00.236 Command Sets Supported 00:09:00.236 NVM Command Set: Supported 00:09:00.236 Boot Partition: Not Supported 00:09:00.236 Memory Page Size Minimum: 4096 bytes 00:09:00.236 Memory Page Size Maximum: 65536 bytes 00:09:00.236 Persistent Memory Region: Not Supported 00:09:00.236 Optional Asynchronous Events Supported 00:09:00.236 Namespace Attribute Notices: Supported 00:09:00.236 Firmware Activation Notices: Not Supported 00:09:00.236 ANA Change Notices: Not Supported 00:09:00.236 PLE Aggregate Log Change Notices: Not Supported 00:09:00.236 LBA Status Info Alert Notices: Not Supported 00:09:00.236 EGE Aggregate Log Change Notices: Not Supported 00:09:00.236 Normal NVM Subsystem Shutdown event: Not Supported 00:09:00.236 Zone Descriptor Change Notices: Not Supported 00:09:00.236 Discovery Log Change Notices: Not Supported 00:09:00.236 Controller Attributes 00:09:00.236 128-bit Host Identifier: Not Supported 00:09:00.236 Non-Operational Permissive Mode: Not Supported 00:09:00.236 NVM Sets: Not Supported 00:09:00.236 Read Recovery Levels: Not Supported 00:09:00.236 Endurance Groups: Not Supported 00:09:00.236 Predictable Latency Mode: Not Supported 00:09:00.236 Traffic Based Keep Alive: Not Supported 00:09:00.236 Namespace Granularity: Not Supported 00:09:00.236 SQ Associations: Not Supported 00:09:00.236 UUID List: Not Supported 00:09:00.236 Multi-Domain Subsystem: Not Supported 00:09:00.236 Fixed Capacity Management: Not Supported 00:09:00.236 Variable Capacity Management: Not Supported 00:09:00.236 Delete Endurance Group: Not Supported 00:09:00.236 Delete NVM Set: Not Supported 00:09:00.236 Extended LBA Formats Supported: Supported 00:09:00.236 Flexible Data Placement Supported: Not Supported 00:09:00.236 00:09:00.236 Controller Memory Buffer Support 00:09:00.236 ================================ 00:09:00.236 Supported: No 00:09:00.236 00:09:00.236 Persistent Memory Region Support 00:09:00.236 ================================ 00:09:00.236 Supported: No 00:09:00.236 00:09:00.236 Admin Command Set Attributes 00:09:00.236 ============================ 00:09:00.236 Security Send/Receive: Not Supported 00:09:00.236
Format NVM: Supported 00:09:00.236 Firmware Activate/Download: Not Supported 00:09:00.236 Namespace Management: Supported 00:09:00.236 Device Self-Test: Not Supported 00:09:00.236 Directives: Supported 00:09:00.236 NVMe-MI: Not Supported 00:09:00.236 Virtualization Management: Not Supported 00:09:00.236 Doorbell Buffer Config: Supported 00:09:00.236 Get LBA Status Capability: Not Supported 00:09:00.236 Command & Feature Lockdown Capability: Not Supported 00:09:00.236 Abort Command Limit: 4 00:09:00.236 Async Event Request Limit: 4 00:09:00.236 Number of Firmware Slots: N/A 00:09:00.236 Firmware Slot 1 Read-Only: N/A 00:09:00.236 Firmware Activation Without Reset: N/A 00:09:00.236 Multiple Update Detection Support: N/A 00:09:00.236 Firmware Update Granularity: No Information Provided 00:09:00.236 Per-Namespace SMART Log: Yes 00:09:00.236 Asymmetric Namespace Access Log Page: Not Supported 00:09:00.236 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:00.236 Command Effects Log Page: Supported 00:09:00.236 Get Log Page Extended Data: Supported 00:09:00.236 Telemetry Log Pages: Not Supported 00:09:00.236 Persistent Event Log Pages: Not Supported 00:09:00.236 Supported Log Pages Log Page: May Support 00:09:00.236 Commands Supported & Effects Log Page: Not Supported 00:09:00.236 Feature Identifiers & Effects Log Page: May Support 00:09:00.236 NVMe-MI Commands & Effects Log Page: May Support 00:09:00.236 Data Area 4 for Telemetry Log: Not Supported 00:09:00.236 Error Log Page Entries Supported: 1 00:09:00.236 Keep Alive: Not Supported 00:09:00.236 00:09:00.236 NVM Command Set Attributes 00:09:00.236 ========================== 00:09:00.236 Submission Queue Entry Size 00:09:00.236 Max: 64 00:09:00.236 Min: 64 00:09:00.236 Completion Queue Entry Size 00:09:00.236 Max: 16 00:09:00.236 Min: 16 00:09:00.236 Number of Namespaces: 256 00:09:00.236 Compare Command: Supported 00:09:00.236 Write Uncorrectable Command: Not Supported 00:09:00.236 Dataset Management Command: Supported 00:09:00.236 Write Zeroes Command: Supported 00:09:00.236 Set Features Save Field: Supported 00:09:00.236 Reservations: Not Supported 00:09:00.236 Timestamp: Supported 00:09:00.236 Copy: Supported 00:09:00.236 Volatile Write Cache: Present 00:09:00.236 Atomic Write Unit (Normal): 1 00:09:00.236 Atomic Write Unit (PFail): 1 00:09:00.236 Atomic Compare & Write Unit: 1 00:09:00.236 Fused Compare & Write: Not Supported 00:09:00.236 Scatter-Gather List 00:09:00.236 SGL Command Set: Supported 00:09:00.236 SGL Keyed: Not Supported 00:09:00.236 SGL Bit Bucket Descriptor: Not Supported 00:09:00.236 SGL Metadata Pointer: Not Supported 00:09:00.236 Oversized SGL: Not Supported 00:09:00.236 SGL Metadata Address: Not Supported 00:09:00.236 SGL Offset: Not Supported 00:09:00.236 Transport SGL Data Block: Not Supported 00:09:00.236 Replay Protected Memory Block: Not Supported 00:09:00.236 00:09:00.236 Firmware Slot Information 00:09:00.236 ========================= 00:09:00.236 Active slot: 1 00:09:00.236 Slot 1 Firmware Revision: 1.0 00:09:00.236 00:09:00.236 00:09:00.236 Commands Supported and Effects 00:09:00.236 ============================== 00:09:00.236 Admin Commands 00:09:00.236 -------------- 00:09:00.236 Delete I/O Submission Queue (00h): Supported 00:09:00.236 Create I/O Submission Queue (01h): Supported 00:09:00.236 Get Log Page (02h): Supported 00:09:00.236 Delete I/O Completion Queue (04h): Supported 00:09:00.236 Create I/O Completion Queue (05h): Supported 00:09:00.236 Identify (06h): Supported 00:09:00.236 Abort (08h): Supported
00:09:00.236 Set Features (09h): Supported 00:09:00.236 Get Features (0Ah): Supported 00:09:00.236 Asynchronous Event Request (0Ch): Supported 00:09:00.236 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:00.236 Directive Send (19h): Supported 00:09:00.236 Directive Receive (1Ah): Supported 00:09:00.236 Virtualization Management (1Ch): Supported 00:09:00.236 Doorbell Buffer Config (7Ch): Supported 00:09:00.236 Format NVM (80h): Supported LBA-Change 00:09:00.236 I/O Commands 00:09:00.236 ------------ 00:09:00.236 Flush (00h): Supported LBA-Change 00:09:00.236 Write (01h): Supported LBA-Change 00:09:00.236 Read (02h): Supported 00:09:00.236 Compare (05h): Supported 00:09:00.236 Write Zeroes (08h): Supported LBA-Change 00:09:00.236 Dataset Management (09h): Supported LBA-Change 00:09:00.236 Unknown (0Ch): Supported 00:09:00.236 Unknown (12h): Supported 00:09:00.236 Copy (19h): Supported LBA-Change 00:09:00.236 Unknown (1Dh): Supported LBA-Change 00:09:00.236 00:09:00.236 Error Log 00:09:00.236 ========= 00:09:00.236 00:09:00.236 Arbitration 00:09:00.236 =========== 00:09:00.236 Arbitration Burst: no limit 00:09:00.236 00:09:00.236 Power Management 00:09:00.236 ================ 00:09:00.236 Number of Power States: 1 00:09:00.236 Current Power State: Power State #0 00:09:00.236 Power State #0: 00:09:00.236 Max Power: 25.00 W 00:09:00.236 Non-Operational State: Operational 00:09:00.236 Entry Latency: 16 microseconds 00:09:00.236 Exit Latency: 4 microseconds 00:09:00.236 Relative Read Throughput: 0 00:09:00.236 Relative Read Latency: 0 00:09:00.237 Relative Write Throughput: 0 00:09:00.237 Relative Write Latency: 0 00:09:00.496 Idle Power: Not Reported 00:09:00.496 Active Power: Not Reported 00:09:00.496 Non-Operational Permissive Mode: Not Supported 00:09:00.496 00:09:00.496 Health Information 00:09:00.496 ================== 00:09:00.496 Critical Warnings: 00:09:00.496 Available Spare Space: OK 00:09:00.496 Temperature: OK 00:09:00.496 Device Reliability: OK 00:09:00.496 Read Only: No 00:09:00.496 Volatile Memory Backup: OK 00:09:00.496 Current Temperature: 323 Kelvin (50 Celsius) 00:09:00.496 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:00.496 Available Spare: 0% 00:09:00.496 Available Spare Threshold: 0% 00:09:00.496 Life Percentage Used: 0% 00:09:00.496 Data Units Read: 1030 00:09:00.496 Data Units Written: 857 00:09:00.496 Host Read Commands: 48344 00:09:00.496 Host Write Commands: 46786 00:09:00.496 Controller Busy Time: 0 minutes 00:09:00.496 Power Cycles: 0 00:09:00.496 Power On Hours: 0 hours 00:09:00.496 Unsafe Shutdowns: 0 00:09:00.496 Unrecoverable Media Errors: 0 00:09:00.496 Lifetime Error Log Entries: 0 00:09:00.496 Warning Temperature Time: 0 minutes 00:09:00.496 Critical Temperature Time: 0 minutes 00:09:00.496 00:09:00.496 Number of Queues 00:09:00.496 ================ 00:09:00.496 Number of I/O Submission Queues: 64 00:09:00.496 Number of I/O Completion Queues: 64 00:09:00.496 00:09:00.496 ZNS Specific Controller Data 00:09:00.496 ============================ 00:09:00.496 Zone Append Size Limit: 0 00:09:00.496 00:09:00.496 00:09:00.496 Active Namespaces 00:09:00.496 ================= 00:09:00.496 Namespace ID:1 00:09:00.496 Error Recovery Timeout: Unlimited 00:09:00.496 Command Set Identifier: NVM (00h) 00:09:00.496 Deallocate: Supported 00:09:00.496 Deallocated/Unwritten Error: Supported 00:09:00.497 Deallocated Read Value: All 0x00 00:09:00.497 Deallocate in Write Zeroes: Not Supported 00:09:00.497 Deallocated Guard Field: 0xFFFF 00:09:00.497 Flush: 
Supported 00:09:00.497 Reservation: Not Supported 00:09:00.497 Metadata Transferred as: Separate Metadata Buffer 00:09:00.497 Namespace Sharing Capabilities: Private 00:09:00.497 Size (in LBAs): 1548666 (5GiB) 00:09:00.497 Capacity (in LBAs): 1548666 (5GiB) 00:09:00.497 Utilization (in LBAs): 1548666 (5GiB) 00:09:00.497 Thin Provisioning: Not Supported 00:09:00.497 Per-NS Atomic Units: No 00:09:00.497 Maximum Single Source Range Length: 128 00:09:00.497 Maximum Copy Length: 128 00:09:00.497 Maximum Source Range Count: 128 00:09:00.497 NGUID/EUI64 Never Reused: No 00:09:00.497 Namespace Write Protected: No 00:09:00.497 Number of LBA Formats: 8 00:09:00.497 Current LBA Format: LBA Format #07 00:09:00.497 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:00.497 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:00.497 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:00.497 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:00.497 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:00.497 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:00.497 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:00.497 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:00.497 00:09:00.497 NVM Specific Namespace Data 00:09:00.497 =========================== 00:09:00.497 Logical Block Storage Tag Mask: 0 00:09:00.497 Protection Information Capabilities: 00:09:00.497 16b Guard Protection Information Storage Tag Support: No 00:09:00.497 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:00.497 Storage Tag Check Read Support: No 00:09:00.497 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.497 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.497 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.497 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.497 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.497 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.497 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.497 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.497 05:55:51 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:00.497 05:55:51 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:00.497 ===================================================== 00:09:00.497 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:00.497 ===================================================== 00:09:00.497 Controller Capabilities/Features 00:09:00.497 ================================ 00:09:00.497 Vendor ID: 1b36 00:09:00.497 Subsystem Vendor ID: 1af4 00:09:00.497 Serial Number: 12341 00:09:00.497 Model Number: QEMU NVMe Ctrl 00:09:00.497 Firmware Version: 8.0.0 00:09:00.497 Recommended Arb Burst: 6 00:09:00.497 IEEE OUI Identifier: 00 54 52 00:09:00.497 Multi-path I/O 00:09:00.497 May have multiple subsystem ports: No 00:09:00.497 May have multiple controllers: No 00:09:00.497 Associated with SR-IOV VF: No 00:09:00.497 Max Data Transfer Size: 524288 00:09:00.497 Max Number of Namespaces: 256 00:09:00.497 Max Number of I/O Queues: 64 00:09:00.497 NVMe 
Specification Version (VS): 1.4 00:09:00.497 NVMe Specification Version (Identify): 1.4 00:09:00.497 Maximum Queue Entries: 2048 00:09:00.497 Contiguous Queues Required: Yes 00:09:00.497 Arbitration Mechanisms Supported 00:09:00.497 Weighted Round Robin: Not Supported 00:09:00.497 Vendor Specific: Not Supported 00:09:00.497 Reset Timeout: 7500 ms 00:09:00.497 Doorbell Stride: 4 bytes 00:09:00.497 NVM Subsystem Reset: Not Supported 00:09:00.497 Command Sets Supported 00:09:00.497 NVM Command Set: Supported 00:09:00.497 Boot Partition: Not Supported 00:09:00.497 Memory Page Size Minimum: 4096 bytes 00:09:00.497 Memory Page Size Maximum: 65536 bytes 00:09:00.497 Persistent Memory Region: Not Supported 00:09:00.497 Optional Asynchronous Events Supported 00:09:00.497 Namespace Attribute Notices: Supported 00:09:00.497 Firmware Activation Notices: Not Supported 00:09:00.497 ANA Change Notices: Not Supported 00:09:00.497 PLE Aggregate Log Change Notices: Not Supported 00:09:00.497 LBA Status Info Alert Notices: Not Supported 00:09:00.497 EGE Aggregate Log Change Notices: Not Supported 00:09:00.497 Normal NVM Subsystem Shutdown event: Not Supported 00:09:00.497 Zone Descriptor Change Notices: Not Supported 00:09:00.497 Discovery Log Change Notices: Not Supported 00:09:00.497 Controller Attributes 00:09:00.497 128-bit Host Identifier: Not Supported 00:09:00.497 Non-Operational Permissive Mode: Not Supported 00:09:00.497 NVM Sets: Not Supported 00:09:00.497 Read Recovery Levels: Not Supported 00:09:00.497 Endurance Groups: Not Supported 00:09:00.497 Predictable Latency Mode: Not Supported 00:09:00.497 Traffic Based Keep ALive: Not Supported 00:09:00.497 Namespace Granularity: Not Supported 00:09:00.497 SQ Associations: Not Supported 00:09:00.497 UUID List: Not Supported 00:09:00.497 Multi-Domain Subsystem: Not Supported 00:09:00.497 Fixed Capacity Management: Not Supported 00:09:00.497 Variable Capacity Management: Not Supported 00:09:00.497 Delete Endurance Group: Not Supported 00:09:00.497 Delete NVM Set: Not Supported 00:09:00.497 Extended LBA Formats Supported: Supported 00:09:00.497 Flexible Data Placement Supported: Not Supported 00:09:00.497 00:09:00.497 Controller Memory Buffer Support 00:09:00.497 ================================ 00:09:00.497 Supported: No 00:09:00.497 00:09:00.497 Persistent Memory Region Support 00:09:00.497 ================================ 00:09:00.497 Supported: No 00:09:00.497 00:09:00.497 Admin Command Set Attributes 00:09:00.497 ============================ 00:09:00.497 Security Send/Receive: Not Supported 00:09:00.497 Format NVM: Supported 00:09:00.497 Firmware Activate/Download: Not Supported 00:09:00.497 Namespace Management: Supported 00:09:00.497 Device Self-Test: Not Supported 00:09:00.497 Directives: Supported 00:09:00.497 NVMe-MI: Not Supported 00:09:00.497 Virtualization Management: Not Supported 00:09:00.497 Doorbell Buffer Config: Supported 00:09:00.497 Get LBA Status Capability: Not Supported 00:09:00.497 Command & Feature Lockdown Capability: Not Supported 00:09:00.497 Abort Command Limit: 4 00:09:00.497 Async Event Request Limit: 4 00:09:00.497 Number of Firmware Slots: N/A 00:09:00.497 Firmware Slot 1 Read-Only: N/A 00:09:00.497 Firmware Activation Without Reset: N/A 00:09:00.497 Multiple Update Detection Support: N/A 00:09:00.497 Firmware Update Granularity: No Information Provided 00:09:00.497 Per-Namespace SMART Log: Yes 00:09:00.497 Asymmetric Namespace Access Log Page: Not Supported 00:09:00.497 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:09:00.497 Command Effects Log Page: Supported 00:09:00.497 Get Log Page Extended Data: Supported 00:09:00.497 Telemetry Log Pages: Not Supported 00:09:00.498 Persistent Event Log Pages: Not Supported 00:09:00.498 Supported Log Pages Log Page: May Support 00:09:00.498 Commands Supported & Effects Log Page: Not Supported 00:09:00.498 Feature Identifiers & Effects Log Page:May Support 00:09:00.498 NVMe-MI Commands & Effects Log Page: May Support 00:09:00.498 Data Area 4 for Telemetry Log: Not Supported 00:09:00.498 Error Log Page Entries Supported: 1 00:09:00.498 Keep Alive: Not Supported 00:09:00.498 00:09:00.498 NVM Command Set Attributes 00:09:00.498 ========================== 00:09:00.498 Submission Queue Entry Size 00:09:00.498 Max: 64 00:09:00.498 Min: 64 00:09:00.498 Completion Queue Entry Size 00:09:00.498 Max: 16 00:09:00.498 Min: 16 00:09:00.498 Number of Namespaces: 256 00:09:00.498 Compare Command: Supported 00:09:00.498 Write Uncorrectable Command: Not Supported 00:09:00.498 Dataset Management Command: Supported 00:09:00.498 Write Zeroes Command: Supported 00:09:00.498 Set Features Save Field: Supported 00:09:00.498 Reservations: Not Supported 00:09:00.498 Timestamp: Supported 00:09:00.498 Copy: Supported 00:09:00.498 Volatile Write Cache: Present 00:09:00.498 Atomic Write Unit (Normal): 1 00:09:00.498 Atomic Write Unit (PFail): 1 00:09:00.498 Atomic Compare & Write Unit: 1 00:09:00.498 Fused Compare & Write: Not Supported 00:09:00.498 Scatter-Gather List 00:09:00.498 SGL Command Set: Supported 00:09:00.498 SGL Keyed: Not Supported 00:09:00.498 SGL Bit Bucket Descriptor: Not Supported 00:09:00.498 SGL Metadata Pointer: Not Supported 00:09:00.498 Oversized SGL: Not Supported 00:09:00.498 SGL Metadata Address: Not Supported 00:09:00.498 SGL Offset: Not Supported 00:09:00.498 Transport SGL Data Block: Not Supported 00:09:00.498 Replay Protected Memory Block: Not Supported 00:09:00.498 00:09:00.498 Firmware Slot Information 00:09:00.498 ========================= 00:09:00.498 Active slot: 1 00:09:00.498 Slot 1 Firmware Revision: 1.0 00:09:00.498 00:09:00.498 00:09:00.498 Commands Supported and Effects 00:09:00.498 ============================== 00:09:00.498 Admin Commands 00:09:00.498 -------------- 00:09:00.498 Delete I/O Submission Queue (00h): Supported 00:09:00.498 Create I/O Submission Queue (01h): Supported 00:09:00.498 Get Log Page (02h): Supported 00:09:00.498 Delete I/O Completion Queue (04h): Supported 00:09:00.498 Create I/O Completion Queue (05h): Supported 00:09:00.498 Identify (06h): Supported 00:09:00.498 Abort (08h): Supported 00:09:00.498 Set Features (09h): Supported 00:09:00.498 Get Features (0Ah): Supported 00:09:00.498 Asynchronous Event Request (0Ch): Supported 00:09:00.498 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:00.498 Directive Send (19h): Supported 00:09:00.498 Directive Receive (1Ah): Supported 00:09:00.498 Virtualization Management (1Ch): Supported 00:09:00.498 Doorbell Buffer Config (7Ch): Supported 00:09:00.498 Format NVM (80h): Supported LBA-Change 00:09:00.498 I/O Commands 00:09:00.498 ------------ 00:09:00.498 Flush (00h): Supported LBA-Change 00:09:00.498 Write (01h): Supported LBA-Change 00:09:00.498 Read (02h): Supported 00:09:00.498 Compare (05h): Supported 00:09:00.498 Write Zeroes (08h): Supported LBA-Change 00:09:00.498 Dataset Management (09h): Supported LBA-Change 00:09:00.498 Unknown (0Ch): Supported 00:09:00.498 Unknown (12h): Supported 00:09:00.498 Copy (19h): Supported LBA-Change 00:09:00.498 Unknown (1Dh): 
Supported LBA-Change 00:09:00.498 00:09:00.498 Error Log 00:09:00.498 ========= 00:09:00.498 00:09:00.498 Arbitration 00:09:00.498 =========== 00:09:00.498 Arbitration Burst: no limit 00:09:00.498 00:09:00.498 Power Management 00:09:00.498 ================ 00:09:00.498 Number of Power States: 1 00:09:00.498 Current Power State: Power State #0 00:09:00.498 Power State #0: 00:09:00.498 Max Power: 25.00 W 00:09:00.498 Non-Operational State: Operational 00:09:00.498 Entry Latency: 16 microseconds 00:09:00.498 Exit Latency: 4 microseconds 00:09:00.498 Relative Read Throughput: 0 00:09:00.498 Relative Read Latency: 0 00:09:00.498 Relative Write Throughput: 0 00:09:00.498 Relative Write Latency: 0 00:09:00.761 Idle Power: Not Reported 00:09:00.761 Active Power: Not Reported 00:09:00.761 Non-Operational Permissive Mode: Not Supported 00:09:00.761 00:09:00.761 Health Information 00:09:00.761 ================== 00:09:00.761 Critical Warnings: 00:09:00.761 Available Spare Space: OK 00:09:00.761 Temperature: OK 00:09:00.761 Device Reliability: OK 00:09:00.761 Read Only: No 00:09:00.761 Volatile Memory Backup: OK 00:09:00.761 Current Temperature: 323 Kelvin (50 Celsius) 00:09:00.761 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:00.761 Available Spare: 0% 00:09:00.761 Available Spare Threshold: 0% 00:09:00.761 Life Percentage Used: 0% 00:09:00.761 Data Units Read: 749 00:09:00.761 Data Units Written: 597 00:09:00.761 Host Read Commands: 34388 00:09:00.761 Host Write Commands: 32070 00:09:00.761 Controller Busy Time: 0 minutes 00:09:00.761 Power Cycles: 0 00:09:00.761 Power On Hours: 0 hours 00:09:00.761 Unsafe Shutdowns: 0 00:09:00.761 Unrecoverable Media Errors: 0 00:09:00.761 Lifetime Error Log Entries: 0 00:09:00.761 Warning Temperature Time: 0 minutes 00:09:00.761 Critical Temperature Time: 0 minutes 00:09:00.761 00:09:00.761 Number of Queues 00:09:00.761 ================ 00:09:00.761 Number of I/O Submission Queues: 64 00:09:00.761 Number of I/O Completion Queues: 64 00:09:00.761 00:09:00.761 ZNS Specific Controller Data 00:09:00.761 ============================ 00:09:00.761 Zone Append Size Limit: 0 00:09:00.761 00:09:00.761 00:09:00.761 Active Namespaces 00:09:00.761 ================= 00:09:00.761 Namespace ID:1 00:09:00.761 Error Recovery Timeout: Unlimited 00:09:00.761 Command Set Identifier: NVM (00h) 00:09:00.761 Deallocate: Supported 00:09:00.761 Deallocated/Unwritten Error: Supported 00:09:00.761 Deallocated Read Value: All 0x00 00:09:00.761 Deallocate in Write Zeroes: Not Supported 00:09:00.761 Deallocated Guard Field: 0xFFFF 00:09:00.761 Flush: Supported 00:09:00.761 Reservation: Not Supported 00:09:00.761 Namespace Sharing Capabilities: Private 00:09:00.761 Size (in LBAs): 1310720 (5GiB) 00:09:00.761 Capacity (in LBAs): 1310720 (5GiB) 00:09:00.761 Utilization (in LBAs): 1310720 (5GiB) 00:09:00.761 Thin Provisioning: Not Supported 00:09:00.761 Per-NS Atomic Units: No 00:09:00.761 Maximum Single Source Range Length: 128 00:09:00.761 Maximum Copy Length: 128 00:09:00.761 Maximum Source Range Count: 128 00:09:00.761 NGUID/EUI64 Never Reused: No 00:09:00.761 Namespace Write Protected: No 00:09:00.761 Number of LBA Formats: 8 00:09:00.761 Current LBA Format: LBA Format #04 00:09:00.761 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:00.761 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:00.761 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:00.761 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:00.761 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:09:00.761 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:00.761 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:00.761 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:00.761 00:09:00.761 NVM Specific Namespace Data 00:09:00.761 =========================== 00:09:00.761 Logical Block Storage Tag Mask: 0 00:09:00.761 Protection Information Capabilities: 00:09:00.761 16b Guard Protection Information Storage Tag Support: No 00:09:00.761 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:00.761 Storage Tag Check Read Support: No 00:09:00.761 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.761 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.761 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.761 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.761 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.761 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.761 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.761 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.761 05:55:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:00.761 05:55:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:00.761 ===================================================== 00:09:00.761 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:00.761 ===================================================== 00:09:00.761 Controller Capabilities/Features 00:09:00.761 ================================ 00:09:00.761 Vendor ID: 1b36 00:09:00.761 Subsystem Vendor ID: 1af4 00:09:00.761 Serial Number: 12342 00:09:00.761 Model Number: QEMU NVMe Ctrl 00:09:00.761 Firmware Version: 8.0.0 00:09:00.761 Recommended Arb Burst: 6 00:09:00.761 IEEE OUI Identifier: 00 54 52 00:09:00.761 Multi-path I/O 00:09:00.761 May have multiple subsystem ports: No 00:09:00.761 May have multiple controllers: No 00:09:00.761 Associated with SR-IOV VF: No 00:09:00.761 Max Data Transfer Size: 524288 00:09:00.761 Max Number of Namespaces: 256 00:09:00.761 Max Number of I/O Queues: 64 00:09:00.761 NVMe Specification Version (VS): 1.4 00:09:00.761 NVMe Specification Version (Identify): 1.4 00:09:00.761 Maximum Queue Entries: 2048 00:09:00.761 Contiguous Queues Required: Yes 00:09:00.761 Arbitration Mechanisms Supported 00:09:00.761 Weighted Round Robin: Not Supported 00:09:00.761 Vendor Specific: Not Supported 00:09:00.761 Reset Timeout: 7500 ms 00:09:00.761 Doorbell Stride: 4 bytes 00:09:00.761 NVM Subsystem Reset: Not Supported 00:09:00.761 Command Sets Supported 00:09:00.761 NVM Command Set: Supported 00:09:00.761 Boot Partition: Not Supported 00:09:00.761 Memory Page Size Minimum: 4096 bytes 00:09:00.761 Memory Page Size Maximum: 65536 bytes 00:09:00.761 Persistent Memory Region: Not Supported 00:09:00.761 Optional Asynchronous Events Supported 00:09:00.761 Namespace Attribute Notices: Supported 00:09:00.761 Firmware Activation Notices: Not Supported 00:09:00.761 ANA Change Notices: Not Supported 00:09:00.761 PLE Aggregate Log Change Notices: Not Supported 00:09:00.761 LBA Status Info Alert Notices: 
Not Supported 00:09:00.761 EGE Aggregate Log Change Notices: Not Supported 00:09:00.762 Normal NVM Subsystem Shutdown event: Not Supported 00:09:00.762 Zone Descriptor Change Notices: Not Supported 00:09:00.762 Discovery Log Change Notices: Not Supported 00:09:00.762 Controller Attributes 00:09:00.762 128-bit Host Identifier: Not Supported 00:09:00.762 Non-Operational Permissive Mode: Not Supported 00:09:00.762 NVM Sets: Not Supported 00:09:00.762 Read Recovery Levels: Not Supported 00:09:00.762 Endurance Groups: Not Supported 00:09:00.762 Predictable Latency Mode: Not Supported 00:09:00.762 Traffic Based Keep ALive: Not Supported 00:09:00.762 Namespace Granularity: Not Supported 00:09:00.762 SQ Associations: Not Supported 00:09:00.762 UUID List: Not Supported 00:09:00.762 Multi-Domain Subsystem: Not Supported 00:09:00.762 Fixed Capacity Management: Not Supported 00:09:00.762 Variable Capacity Management: Not Supported 00:09:00.762 Delete Endurance Group: Not Supported 00:09:00.762 Delete NVM Set: Not Supported 00:09:00.762 Extended LBA Formats Supported: Supported 00:09:00.762 Flexible Data Placement Supported: Not Supported 00:09:00.762 00:09:00.762 Controller Memory Buffer Support 00:09:00.762 ================================ 00:09:00.762 Supported: No 00:09:00.762 00:09:00.762 Persistent Memory Region Support 00:09:00.762 ================================ 00:09:00.762 Supported: No 00:09:00.762 00:09:00.762 Admin Command Set Attributes 00:09:00.762 ============================ 00:09:00.762 Security Send/Receive: Not Supported 00:09:00.762 Format NVM: Supported 00:09:00.762 Firmware Activate/Download: Not Supported 00:09:00.762 Namespace Management: Supported 00:09:00.762 Device Self-Test: Not Supported 00:09:00.762 Directives: Supported 00:09:00.762 NVMe-MI: Not Supported 00:09:00.762 Virtualization Management: Not Supported 00:09:00.762 Doorbell Buffer Config: Supported 00:09:00.762 Get LBA Status Capability: Not Supported 00:09:00.762 Command & Feature Lockdown Capability: Not Supported 00:09:00.762 Abort Command Limit: 4 00:09:00.762 Async Event Request Limit: 4 00:09:00.762 Number of Firmware Slots: N/A 00:09:00.762 Firmware Slot 1 Read-Only: N/A 00:09:00.762 Firmware Activation Without Reset: N/A 00:09:00.762 Multiple Update Detection Support: N/A 00:09:00.762 Firmware Update Granularity: No Information Provided 00:09:00.762 Per-Namespace SMART Log: Yes 00:09:00.762 Asymmetric Namespace Access Log Page: Not Supported 00:09:00.762 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:00.762 Command Effects Log Page: Supported 00:09:00.762 Get Log Page Extended Data: Supported 00:09:00.762 Telemetry Log Pages: Not Supported 00:09:00.762 Persistent Event Log Pages: Not Supported 00:09:00.762 Supported Log Pages Log Page: May Support 00:09:00.762 Commands Supported & Effects Log Page: Not Supported 00:09:00.762 Feature Identifiers & Effects Log Page:May Support 00:09:00.762 NVMe-MI Commands & Effects Log Page: May Support 00:09:00.762 Data Area 4 for Telemetry Log: Not Supported 00:09:00.762 Error Log Page Entries Supported: 1 00:09:00.762 Keep Alive: Not Supported 00:09:00.762 00:09:00.762 NVM Command Set Attributes 00:09:00.762 ========================== 00:09:00.762 Submission Queue Entry Size 00:09:00.762 Max: 64 00:09:00.762 Min: 64 00:09:00.762 Completion Queue Entry Size 00:09:00.762 Max: 16 00:09:00.762 Min: 16 00:09:00.762 Number of Namespaces: 256 00:09:00.762 Compare Command: Supported 00:09:00.762 Write Uncorrectable Command: Not Supported 00:09:00.762 Dataset Management Command: 
Supported 00:09:00.762 Write Zeroes Command: Supported 00:09:00.762 Set Features Save Field: Supported 00:09:00.762 Reservations: Not Supported 00:09:00.762 Timestamp: Supported 00:09:00.762 Copy: Supported 00:09:00.762 Volatile Write Cache: Present 00:09:00.762 Atomic Write Unit (Normal): 1 00:09:00.762 Atomic Write Unit (PFail): 1 00:09:00.762 Atomic Compare & Write Unit: 1 00:09:00.762 Fused Compare & Write: Not Supported 00:09:00.762 Scatter-Gather List 00:09:00.762 SGL Command Set: Supported 00:09:00.762 SGL Keyed: Not Supported 00:09:00.762 SGL Bit Bucket Descriptor: Not Supported 00:09:00.762 SGL Metadata Pointer: Not Supported 00:09:00.762 Oversized SGL: Not Supported 00:09:00.762 SGL Metadata Address: Not Supported 00:09:00.762 SGL Offset: Not Supported 00:09:00.762 Transport SGL Data Block: Not Supported 00:09:00.762 Replay Protected Memory Block: Not Supported 00:09:00.762 00:09:00.762 Firmware Slot Information 00:09:00.762 ========================= 00:09:00.762 Active slot: 1 00:09:00.762 Slot 1 Firmware Revision: 1.0 00:09:00.762 00:09:00.762 00:09:00.762 Commands Supported and Effects 00:09:00.762 ============================== 00:09:00.762 Admin Commands 00:09:00.762 -------------- 00:09:00.762 Delete I/O Submission Queue (00h): Supported 00:09:00.762 Create I/O Submission Queue (01h): Supported 00:09:00.762 Get Log Page (02h): Supported 00:09:00.762 Delete I/O Completion Queue (04h): Supported 00:09:00.762 Create I/O Completion Queue (05h): Supported 00:09:00.762 Identify (06h): Supported 00:09:00.762 Abort (08h): Supported 00:09:00.762 Set Features (09h): Supported 00:09:00.762 Get Features (0Ah): Supported 00:09:00.762 Asynchronous Event Request (0Ch): Supported 00:09:00.762 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:00.762 Directive Send (19h): Supported 00:09:00.762 Directive Receive (1Ah): Supported 00:09:00.762 Virtualization Management (1Ch): Supported 00:09:00.762 Doorbell Buffer Config (7Ch): Supported 00:09:00.762 Format NVM (80h): Supported LBA-Change 00:09:00.762 I/O Commands 00:09:00.762 ------------ 00:09:00.762 Flush (00h): Supported LBA-Change 00:09:00.762 Write (01h): Supported LBA-Change 00:09:00.762 Read (02h): Supported 00:09:00.762 Compare (05h): Supported 00:09:00.762 Write Zeroes (08h): Supported LBA-Change 00:09:00.762 Dataset Management (09h): Supported LBA-Change 00:09:00.762 Unknown (0Ch): Supported 00:09:00.762 Unknown (12h): Supported 00:09:00.762 Copy (19h): Supported LBA-Change 00:09:00.762 Unknown (1Dh): Supported LBA-Change 00:09:00.762 00:09:00.762 Error Log 00:09:00.762 ========= 00:09:00.762 00:09:00.762 Arbitration 00:09:00.762 =========== 00:09:00.762 Arbitration Burst: no limit 00:09:00.762 00:09:00.762 Power Management 00:09:00.762 ================ 00:09:00.762 Number of Power States: 1 00:09:00.762 Current Power State: Power State #0 00:09:00.762 Power State #0: 00:09:00.762 Max Power: 25.00 W 00:09:00.762 Non-Operational State: Operational 00:09:00.762 Entry Latency: 16 microseconds 00:09:00.762 Exit Latency: 4 microseconds 00:09:00.762 Relative Read Throughput: 0 00:09:00.762 Relative Read Latency: 0 00:09:00.762 Relative Write Throughput: 0 00:09:00.762 Relative Write Latency: 0 00:09:00.762 Idle Power: Not Reported 00:09:00.762 Active Power: Not Reported 00:09:00.762 Non-Operational Permissive Mode: Not Supported 00:09:00.762 00:09:00.762 Health Information 00:09:00.762 ================== 00:09:00.763 Critical Warnings: 00:09:00.763 Available Spare Space: OK 00:09:00.763 Temperature: OK 00:09:00.763 Device 
Reliability: OK 00:09:00.763 Read Only: No 00:09:00.763 Volatile Memory Backup: OK 00:09:00.763 Current Temperature: 323 Kelvin (50 Celsius) 00:09:00.763 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:00.763 Available Spare: 0% 00:09:00.763 Available Spare Threshold: 0% 00:09:00.763 Life Percentage Used: 0% 00:09:00.763 Data Units Read: 2252 00:09:00.763 Data Units Written: 1932 00:09:00.763 Host Read Commands: 101698 00:09:00.763 Host Write Commands: 97472 00:09:00.763 Controller Busy Time: 0 minutes 00:09:00.763 Power Cycles: 0 00:09:00.763 Power On Hours: 0 hours 00:09:00.763 Unsafe Shutdowns: 0 00:09:00.763 Unrecoverable Media Errors: 0 00:09:00.763 Lifetime Error Log Entries: 0 00:09:00.763 Warning Temperature Time: 0 minutes 00:09:00.763 Critical Temperature Time: 0 minutes 00:09:00.763 00:09:00.763 Number of Queues 00:09:00.763 ================ 00:09:00.763 Number of I/O Submission Queues: 64 00:09:00.763 Number of I/O Completion Queues: 64 00:09:00.763 00:09:00.763 ZNS Specific Controller Data 00:09:00.763 ============================ 00:09:00.763 Zone Append Size Limit: 0 00:09:00.763 00:09:00.763 00:09:00.763 Active Namespaces 00:09:00.763 ================= 00:09:00.763 Namespace ID:1 00:09:00.763 Error Recovery Timeout: Unlimited 00:09:00.763 Command Set Identifier: NVM (00h) 00:09:00.763 Deallocate: Supported 00:09:00.763 Deallocated/Unwritten Error: Supported 00:09:00.763 Deallocated Read Value: All 0x00 00:09:00.763 Deallocate in Write Zeroes: Not Supported 00:09:00.763 Deallocated Guard Field: 0xFFFF 00:09:00.763 Flush: Supported 00:09:00.763 Reservation: Not Supported 00:09:00.763 Namespace Sharing Capabilities: Private 00:09:00.763 Size (in LBAs): 1048576 (4GiB) 00:09:00.763 Capacity (in LBAs): 1048576 (4GiB) 00:09:00.763 Utilization (in LBAs): 1048576 (4GiB) 00:09:00.763 Thin Provisioning: Not Supported 00:09:00.763 Per-NS Atomic Units: No 00:09:00.763 Maximum Single Source Range Length: 128 00:09:00.763 Maximum Copy Length: 128 00:09:00.763 Maximum Source Range Count: 128 00:09:00.763 NGUID/EUI64 Never Reused: No 00:09:00.763 Namespace Write Protected: No 00:09:00.763 Number of LBA Formats: 8 00:09:00.763 Current LBA Format: LBA Format #04 00:09:00.763 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:00.763 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:00.763 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:00.763 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:00.763 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:00.763 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:00.763 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:00.763 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:00.763 00:09:00.763 NVM Specific Namespace Data 00:09:00.763 =========================== 00:09:00.763 Logical Block Storage Tag Mask: 0 00:09:00.763 Protection Information Capabilities: 00:09:00.763 16b Guard Protection Information Storage Tag Support: No 00:09:00.763 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:00.763 Storage Tag Check Read Support: No 00:09:00.763 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.763 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.763 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.763 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.763 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.763 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.763 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.763 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.763 Namespace ID:2 00:09:00.763 Error Recovery Timeout: Unlimited 00:09:00.763 Command Set Identifier: NVM (00h) 00:09:00.763 Deallocate: Supported 00:09:00.763 Deallocated/Unwritten Error: Supported 00:09:00.763 Deallocated Read Value: All 0x00 00:09:00.763 Deallocate in Write Zeroes: Not Supported 00:09:00.763 Deallocated Guard Field: 0xFFFF 00:09:00.763 Flush: Supported 00:09:00.763 Reservation: Not Supported 00:09:00.763 Namespace Sharing Capabilities: Private 00:09:00.763 Size (in LBAs): 1048576 (4GiB) 00:09:00.763 Capacity (in LBAs): 1048576 (4GiB) 00:09:00.763 Utilization (in LBAs): 1048576 (4GiB) 00:09:00.763 Thin Provisioning: Not Supported 00:09:00.763 Per-NS Atomic Units: No 00:09:00.763 Maximum Single Source Range Length: 128 00:09:00.763 Maximum Copy Length: 128 00:09:00.763 Maximum Source Range Count: 128 00:09:00.763 NGUID/EUI64 Never Reused: No 00:09:00.763 Namespace Write Protected: No 00:09:00.763 Number of LBA Formats: 8 00:09:00.763 Current LBA Format: LBA Format #04 00:09:00.763 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:00.763 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:00.763 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:00.763 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:00.763 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:00.763 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:00.763 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:00.763 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:00.763 00:09:00.763 NVM Specific Namespace Data 00:09:00.763 =========================== 00:09:00.763 Logical Block Storage Tag Mask: 0 00:09:00.763 Protection Information Capabilities: 00:09:00.763 16b Guard Protection Information Storage Tag Support: No 00:09:00.763 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:00.763 Storage Tag Check Read Support: No 00:09:00.763 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.763 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.763 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.763 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.763 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.763 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.763 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.763 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:00.763 Namespace ID:3 00:09:00.763 Error Recovery Timeout: Unlimited 00:09:00.763 Command Set Identifier: NVM (00h) 00:09:00.763 Deallocate: Supported 00:09:00.763 Deallocated/Unwritten Error: Supported 00:09:00.763 Deallocated Read Value: All 0x00 00:09:00.763 Deallocate in Write Zeroes: Not Supported 00:09:00.763 Deallocated Guard Field: 0xFFFF 00:09:00.764 Flush: Supported 00:09:00.764 Reservation: Not Supported 00:09:00.764 
Namespace Sharing Capabilities: Private 00:09:00.764 Size (in LBAs): 1048576 (4GiB) 00:09:00.764 Capacity (in LBAs): 1048576 (4GiB) 00:09:00.764 Utilization (in LBAs): 1048576 (4GiB) 00:09:00.764 Thin Provisioning: Not Supported 00:09:00.764 Per-NS Atomic Units: No 00:09:00.764 Maximum Single Source Range Length: 128 00:09:00.764 Maximum Copy Length: 128 00:09:00.764 Maximum Source Range Count: 128 00:09:00.764 NGUID/EUI64 Never Reused: No 00:09:00.764 Namespace Write Protected: No 00:09:00.764 Number of LBA Formats: 8 00:09:00.764 Current LBA Format: LBA Format #04 00:09:00.764 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:00.764 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:00.764 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:00.764 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:00.764 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:00.764 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:00.764 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:00.764 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:00.764 00:09:00.764 NVM Specific Namespace Data 00:09:00.764 =========================== 00:09:00.764 Logical Block Storage Tag Mask: 0 00:09:00.764 Protection Information Capabilities: 00:09:00.764 16b Guard Protection Information Storage Tag Support: No 00:09:00.764 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:01.035 Storage Tag Check Read Support: No 00:09:01.035 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:01.035 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:01.035 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:01.035 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:01.035 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:01.035 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:01.035 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:01.035 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:01.035 05:55:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:01.035 05:55:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:01.035 ===================================================== 00:09:01.035 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:01.035 ===================================================== 00:09:01.035 Controller Capabilities/Features 00:09:01.035 ================================ 00:09:01.035 Vendor ID: 1b36 00:09:01.035 Subsystem Vendor ID: 1af4 00:09:01.035 Serial Number: 12343 00:09:01.035 Model Number: QEMU NVMe Ctrl 00:09:01.035 Firmware Version: 8.0.0 00:09:01.035 Recommended Arb Burst: 6 00:09:01.035 IEEE OUI Identifier: 00 54 52 00:09:01.035 Multi-path I/O 00:09:01.035 May have multiple subsystem ports: No 00:09:01.035 May have multiple controllers: Yes 00:09:01.035 Associated with SR-IOV VF: No 00:09:01.035 Max Data Transfer Size: 524288 00:09:01.035 Max Number of Namespaces: 256 00:09:01.035 Max Number of I/O Queues: 64 00:09:01.035 NVMe Specification Version (VS): 1.4 00:09:01.035 NVMe Specification Version (Identify): 1.4 00:09:01.035 Maximum Queue Entries: 2048 
00:09:01.035 Contiguous Queues Required: Yes 00:09:01.035 Arbitration Mechanisms Supported 00:09:01.035 Weighted Round Robin: Not Supported 00:09:01.035 Vendor Specific: Not Supported 00:09:01.035 Reset Timeout: 7500 ms 00:09:01.035 Doorbell Stride: 4 bytes 00:09:01.035 NVM Subsystem Reset: Not Supported 00:09:01.035 Command Sets Supported 00:09:01.035 NVM Command Set: Supported 00:09:01.035 Boot Partition: Not Supported 00:09:01.035 Memory Page Size Minimum: 4096 bytes 00:09:01.035 Memory Page Size Maximum: 65536 bytes 00:09:01.035 Persistent Memory Region: Not Supported 00:09:01.035 Optional Asynchronous Events Supported 00:09:01.035 Namespace Attribute Notices: Supported 00:09:01.035 Firmware Activation Notices: Not Supported 00:09:01.035 ANA Change Notices: Not Supported 00:09:01.035 PLE Aggregate Log Change Notices: Not Supported 00:09:01.035 LBA Status Info Alert Notices: Not Supported 00:09:01.035 EGE Aggregate Log Change Notices: Not Supported 00:09:01.035 Normal NVM Subsystem Shutdown event: Not Supported 00:09:01.036 Zone Descriptor Change Notices: Not Supported 00:09:01.036 Discovery Log Change Notices: Not Supported 00:09:01.036 Controller Attributes 00:09:01.036 128-bit Host Identifier: Not Supported 00:09:01.036 Non-Operational Permissive Mode: Not Supported 00:09:01.036 NVM Sets: Not Supported 00:09:01.036 Read Recovery Levels: Not Supported 00:09:01.036 Endurance Groups: Supported 00:09:01.036 Predictable Latency Mode: Not Supported 00:09:01.036 Traffic Based Keep ALive: Not Supported 00:09:01.036 Namespace Granularity: Not Supported 00:09:01.036 SQ Associations: Not Supported 00:09:01.036 UUID List: Not Supported 00:09:01.036 Multi-Domain Subsystem: Not Supported 00:09:01.036 Fixed Capacity Management: Not Supported 00:09:01.036 Variable Capacity Management: Not Supported 00:09:01.036 Delete Endurance Group: Not Supported 00:09:01.036 Delete NVM Set: Not Supported 00:09:01.036 Extended LBA Formats Supported: Supported 00:09:01.036 Flexible Data Placement Supported: Supported 00:09:01.036 00:09:01.036 Controller Memory Buffer Support 00:09:01.036 ================================ 00:09:01.036 Supported: No 00:09:01.036 00:09:01.036 Persistent Memory Region Support 00:09:01.036 ================================ 00:09:01.036 Supported: No 00:09:01.036 00:09:01.036 Admin Command Set Attributes 00:09:01.036 ============================ 00:09:01.036 Security Send/Receive: Not Supported 00:09:01.036 Format NVM: Supported 00:09:01.036 Firmware Activate/Download: Not Supported 00:09:01.036 Namespace Management: Supported 00:09:01.036 Device Self-Test: Not Supported 00:09:01.036 Directives: Supported 00:09:01.036 NVMe-MI: Not Supported 00:09:01.036 Virtualization Management: Not Supported 00:09:01.036 Doorbell Buffer Config: Supported 00:09:01.036 Get LBA Status Capability: Not Supported 00:09:01.036 Command & Feature Lockdown Capability: Not Supported 00:09:01.036 Abort Command Limit: 4 00:09:01.036 Async Event Request Limit: 4 00:09:01.036 Number of Firmware Slots: N/A 00:09:01.036 Firmware Slot 1 Read-Only: N/A 00:09:01.036 Firmware Activation Without Reset: N/A 00:09:01.036 Multiple Update Detection Support: N/A 00:09:01.036 Firmware Update Granularity: No Information Provided 00:09:01.036 Per-Namespace SMART Log: Yes 00:09:01.036 Asymmetric Namespace Access Log Page: Not Supported 00:09:01.036 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:01.036 Command Effects Log Page: Supported 00:09:01.036 Get Log Page Extended Data: Supported 00:09:01.036 Telemetry Log Pages: Not 
Supported 00:09:01.036 Persistent Event Log Pages: Not Supported 00:09:01.036 Supported Log Pages Log Page: May Support 00:09:01.036 Commands Supported & Effects Log Page: Not Supported 00:09:01.036 Feature Identifiers & Effects Log Page:May Support 00:09:01.036 NVMe-MI Commands & Effects Log Page: May Support 00:09:01.036 Data Area 4 for Telemetry Log: Not Supported 00:09:01.036 Error Log Page Entries Supported: 1 00:09:01.036 Keep Alive: Not Supported 00:09:01.036 00:09:01.036 NVM Command Set Attributes 00:09:01.036 ========================== 00:09:01.036 Submission Queue Entry Size 00:09:01.036 Max: 64 00:09:01.036 Min: 64 00:09:01.036 Completion Queue Entry Size 00:09:01.036 Max: 16 00:09:01.036 Min: 16 00:09:01.036 Number of Namespaces: 256 00:09:01.036 Compare Command: Supported 00:09:01.036 Write Uncorrectable Command: Not Supported 00:09:01.036 Dataset Management Command: Supported 00:09:01.036 Write Zeroes Command: Supported 00:09:01.036 Set Features Save Field: Supported 00:09:01.036 Reservations: Not Supported 00:09:01.036 Timestamp: Supported 00:09:01.036 Copy: Supported 00:09:01.036 Volatile Write Cache: Present 00:09:01.036 Atomic Write Unit (Normal): 1 00:09:01.036 Atomic Write Unit (PFail): 1 00:09:01.036 Atomic Compare & Write Unit: 1 00:09:01.036 Fused Compare & Write: Not Supported 00:09:01.036 Scatter-Gather List 00:09:01.036 SGL Command Set: Supported 00:09:01.036 SGL Keyed: Not Supported 00:09:01.036 SGL Bit Bucket Descriptor: Not Supported 00:09:01.036 SGL Metadata Pointer: Not Supported 00:09:01.036 Oversized SGL: Not Supported 00:09:01.036 SGL Metadata Address: Not Supported 00:09:01.036 SGL Offset: Not Supported 00:09:01.036 Transport SGL Data Block: Not Supported 00:09:01.036 Replay Protected Memory Block: Not Supported 00:09:01.036 00:09:01.036 Firmware Slot Information 00:09:01.036 ========================= 00:09:01.036 Active slot: 1 00:09:01.036 Slot 1 Firmware Revision: 1.0 00:09:01.036 00:09:01.036 00:09:01.036 Commands Supported and Effects 00:09:01.036 ============================== 00:09:01.036 Admin Commands 00:09:01.036 -------------- 00:09:01.036 Delete I/O Submission Queue (00h): Supported 00:09:01.036 Create I/O Submission Queue (01h): Supported 00:09:01.036 Get Log Page (02h): Supported 00:09:01.036 Delete I/O Completion Queue (04h): Supported 00:09:01.036 Create I/O Completion Queue (05h): Supported 00:09:01.036 Identify (06h): Supported 00:09:01.036 Abort (08h): Supported 00:09:01.036 Set Features (09h): Supported 00:09:01.036 Get Features (0Ah): Supported 00:09:01.036 Asynchronous Event Request (0Ch): Supported 00:09:01.036 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:01.036 Directive Send (19h): Supported 00:09:01.036 Directive Receive (1Ah): Supported 00:09:01.036 Virtualization Management (1Ch): Supported 00:09:01.036 Doorbell Buffer Config (7Ch): Supported 00:09:01.036 Format NVM (80h): Supported LBA-Change 00:09:01.036 I/O Commands 00:09:01.036 ------------ 00:09:01.036 Flush (00h): Supported LBA-Change 00:09:01.036 Write (01h): Supported LBA-Change 00:09:01.036 Read (02h): Supported 00:09:01.036 Compare (05h): Supported 00:09:01.036 Write Zeroes (08h): Supported LBA-Change 00:09:01.036 Dataset Management (09h): Supported LBA-Change 00:09:01.036 Unknown (0Ch): Supported 00:09:01.036 Unknown (12h): Supported 00:09:01.036 Copy (19h): Supported LBA-Change 00:09:01.036 Unknown (1Dh): Supported LBA-Change 00:09:01.036 00:09:01.036 Error Log 00:09:01.036 ========= 00:09:01.036 00:09:01.036 Arbitration 00:09:01.036 =========== 
00:09:01.036 Arbitration Burst: no limit 00:09:01.036 00:09:01.036 Power Management 00:09:01.036 ================ 00:09:01.036 Number of Power States: 1 00:09:01.036 Current Power State: Power State #0 00:09:01.036 Power State #0: 00:09:01.036 Max Power: 25.00 W 00:09:01.036 Non-Operational State: Operational 00:09:01.036 Entry Latency: 16 microseconds 00:09:01.036 Exit Latency: 4 microseconds 00:09:01.036 Relative Read Throughput: 0 00:09:01.037 Relative Read Latency: 0 00:09:01.037 Relative Write Throughput: 0 00:09:01.037 Relative Write Latency: 0 00:09:01.037 Idle Power: Not Reported 00:09:01.037 Active Power: Not Reported 00:09:01.037 Non-Operational Permissive Mode: Not Supported 00:09:01.037 00:09:01.037 Health Information 00:09:01.037 ================== 00:09:01.037 Critical Warnings: 00:09:01.037 Available Spare Space: OK 00:09:01.037 Temperature: OK 00:09:01.037 Device Reliability: OK 00:09:01.037 Read Only: No 00:09:01.037 Volatile Memory Backup: OK 00:09:01.037 Current Temperature: 323 Kelvin (50 Celsius) 00:09:01.037 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:01.037 Available Spare: 0% 00:09:01.037 Available Spare Threshold: 0% 00:09:01.037 Life Percentage Used: 0% 00:09:01.037 Data Units Read: 811 00:09:01.037 Data Units Written: 704 00:09:01.037 Host Read Commands: 34387 00:09:01.037 Host Write Commands: 32977 00:09:01.037 Controller Busy Time: 0 minutes 00:09:01.037 Power Cycles: 0 00:09:01.037 Power On Hours: 0 hours 00:09:01.037 Unsafe Shutdowns: 0 00:09:01.037 Unrecoverable Media Errors: 0 00:09:01.037 Lifetime Error Log Entries: 0 00:09:01.037 Warning Temperature Time: 0 minutes 00:09:01.037 Critical Temperature Time: 0 minutes 00:09:01.037 00:09:01.037 Number of Queues 00:09:01.037 ================ 00:09:01.037 Number of I/O Submission Queues: 64 00:09:01.037 Number of I/O Completion Queues: 64 00:09:01.037 00:09:01.037 ZNS Specific Controller Data 00:09:01.037 ============================ 00:09:01.037 Zone Append Size Limit: 0 00:09:01.037 00:09:01.037 00:09:01.037 Active Namespaces 00:09:01.037 ================= 00:09:01.037 Namespace ID:1 00:09:01.037 Error Recovery Timeout: Unlimited 00:09:01.037 Command Set Identifier: NVM (00h) 00:09:01.037 Deallocate: Supported 00:09:01.037 Deallocated/Unwritten Error: Supported 00:09:01.037 Deallocated Read Value: All 0x00 00:09:01.037 Deallocate in Write Zeroes: Not Supported 00:09:01.037 Deallocated Guard Field: 0xFFFF 00:09:01.037 Flush: Supported 00:09:01.037 Reservation: Not Supported 00:09:01.037 Namespace Sharing Capabilities: Multiple Controllers 00:09:01.037 Size (in LBAs): 262144 (1GiB) 00:09:01.037 Capacity (in LBAs): 262144 (1GiB) 00:09:01.037 Utilization (in LBAs): 262144 (1GiB) 00:09:01.037 Thin Provisioning: Not Supported 00:09:01.037 Per-NS Atomic Units: No 00:09:01.037 Maximum Single Source Range Length: 128 00:09:01.037 Maximum Copy Length: 128 00:09:01.037 Maximum Source Range Count: 128 00:09:01.037 NGUID/EUI64 Never Reused: No 00:09:01.037 Namespace Write Protected: No 00:09:01.037 Endurance group ID: 1 00:09:01.037 Number of LBA Formats: 8 00:09:01.037 Current LBA Format: LBA Format #04 00:09:01.037 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:01.037 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:01.037 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:01.037 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:01.037 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:01.037 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:01.037 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:09:01.037 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:01.037 00:09:01.037 Get Feature FDP: 00:09:01.037 ================ 00:09:01.037 Enabled: Yes 00:09:01.037 FDP configuration index: 0 00:09:01.037 00:09:01.037 FDP configurations log page 00:09:01.037 =========================== 00:09:01.037 Number of FDP configurations: 1 00:09:01.037 Version: 0 00:09:01.037 Size: 112 00:09:01.037 FDP Configuration Descriptor: 0 00:09:01.037 Descriptor Size: 96 00:09:01.037 Reclaim Group Identifier format: 2 00:09:01.037 FDP Volatile Write Cache: Not Present 00:09:01.037 FDP Configuration: Valid 00:09:01.037 Vendor Specific Size: 0 00:09:01.037 Number of Reclaim Groups: 2 00:09:01.037 Number of Reclaim Unit Handles: 8 00:09:01.037 Max Placement Identifiers: 128 00:09:01.037 Number of Namespaces Supported: 256 00:09:01.037 Reclaim Unit Nominal Size: 6000000 bytes 00:09:01.037 Estimated Reclaim Unit Time Limit: Not Reported 00:09:01.037 RUH Desc #000: RUH Type: Initially Isolated 00:09:01.037 RUH Desc #001: RUH Type: Initially Isolated 00:09:01.037 RUH Desc #002: RUH Type: Initially Isolated 00:09:01.037 RUH Desc #003: RUH Type: Initially Isolated 00:09:01.037 RUH Desc #004: RUH Type: Initially Isolated 00:09:01.037 RUH Desc #005: RUH Type: Initially Isolated 00:09:01.037 RUH Desc #006: RUH Type: Initially Isolated 00:09:01.037 RUH Desc #007: RUH Type: Initially Isolated 00:09:01.037 00:09:01.037 FDP reclaim unit handle usage log page 00:09:01.301 ====================================== 00:09:01.301 Number of Reclaim Unit Handles: 8 00:09:01.301 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:01.301 RUH Usage Desc #001: RUH Attributes: Unused 00:09:01.301 RUH Usage Desc #002: RUH Attributes: Unused 00:09:01.301 RUH Usage Desc #003: RUH Attributes: Unused 00:09:01.301 RUH Usage Desc #004: RUH Attributes: Unused 00:09:01.301 RUH Usage Desc #005: RUH Attributes: Unused 00:09:01.301 RUH Usage Desc #006: RUH Attributes: Unused 00:09:01.301 RUH Usage Desc #007: RUH Attributes: Unused 00:09:01.301 00:09:01.301 FDP statistics log page 00:09:01.301 ======================= 00:09:01.301 Host bytes with metadata written: 442146816 00:09:01.301 Media bytes with metadata written: 442200064 00:09:01.301 Media bytes erased: 0 00:09:01.301 00:09:01.301 FDP events log page 00:09:01.301 =================== 00:09:01.301 Number of FDP events: 0 00:09:01.301 00:09:01.301 NVM Specific Namespace Data 00:09:01.301 =========================== 00:09:01.301 Logical Block Storage Tag Mask: 0 00:09:01.301 Protection Information Capabilities: 00:09:01.301 16b Guard Protection Information Storage Tag Support: No 00:09:01.301 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:01.301 Storage Tag Check Read Support: No 00:09:01.301 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:01.301 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:01.301 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:01.301 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:01.301 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:01.301 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:01.301 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:01.301 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:01.301 ************************************ 00:09:01.301 END TEST nvme_identify 00:09:01.301 ************************************ 00:09:01.301 00:09:01.301 real 0m1.389s 00:09:01.301 user 0m0.554s 00:09:01.301 sys 0m0.618s 00:09:01.301 05:55:52 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.301 05:55:52 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:01.301 05:55:52 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:01.301 05:55:52 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:01.301 05:55:52 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:01.301 05:55:52 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.302 05:55:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:01.302 ************************************ 00:09:01.302 START TEST nvme_perf 00:09:01.302 ************************************ 00:09:01.302 05:55:52 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:09:01.302 05:55:52 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:02.679 Initializing NVMe Controllers 00:09:02.679 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:02.679 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:02.680 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:02.680 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:02.680 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:02.680 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:02.680 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:02.680 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:02.680 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:02.680 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:02.680 Initialization complete. Launching workers. 
00:09:02.680 ======================================================== 00:09:02.680 Latency(us) 00:09:02.680 Device Information : IOPS MiB/s Average min max 00:09:02.680 PCIE (0000:00:10.0) NSID 1 from core 0: 13079.19 153.27 9786.90 6169.08 29225.58 00:09:02.680 PCIE (0000:00:11.0) NSID 1 from core 0: 13079.19 153.27 9775.74 5853.63 28442.46 00:09:02.680 PCIE (0000:00:13.0) NSID 1 from core 0: 13079.19 153.27 9762.31 4788.98 28102.11 00:09:02.680 PCIE (0000:00:12.0) NSID 1 from core 0: 13079.19 153.27 9748.32 4393.37 27263.65 00:09:02.680 PCIE (0000:00:12.0) NSID 2 from core 0: 13079.19 153.27 9734.25 3926.45 26413.74 00:09:02.680 PCIE (0000:00:12.0) NSID 3 from core 0: 13079.19 153.27 9719.82 3446.98 25616.14 00:09:02.680 ======================================================== 00:09:02.680 Total : 78475.16 919.63 9754.56 3446.98 29225.58 00:09:02.680 00:09:02.680 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:02.680 ================================================================================= 00:09:02.680 1.00000% : 8102.633us 00:09:02.680 10.00000% : 8638.836us 00:09:02.680 25.00000% : 9115.462us 00:09:02.680 50.00000% : 9592.087us 00:09:02.680 75.00000% : 10068.713us 00:09:02.680 90.00000% : 10545.338us 00:09:02.680 95.00000% : 11796.480us 00:09:02.680 98.00000% : 12868.887us 00:09:02.680 99.00000% : 14477.498us 00:09:02.680 99.50000% : 22043.927us 00:09:02.680 99.90000% : 28954.996us 00:09:02.680 99.99000% : 29193.309us 00:09:02.680 99.99900% : 29312.465us 00:09:02.680 99.99990% : 29312.465us 00:09:02.680 99.99999% : 29312.465us 00:09:02.680 00:09:02.680 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:02.680 ================================================================================= 00:09:02.680 1.00000% : 8162.211us 00:09:02.680 10.00000% : 8698.415us 00:09:02.680 25.00000% : 9175.040us 00:09:02.680 50.00000% : 9592.087us 00:09:02.680 75.00000% : 10009.135us 00:09:02.680 90.00000% : 10485.760us 00:09:02.680 95.00000% : 11796.480us 00:09:02.680 98.00000% : 12809.309us 00:09:02.680 99.00000% : 14000.873us 00:09:02.680 99.50000% : 21924.771us 00:09:02.680 99.90000% : 28120.902us 00:09:02.680 99.99000% : 28478.371us 00:09:02.680 99.99900% : 28478.371us 00:09:02.680 99.99990% : 28478.371us 00:09:02.680 99.99999% : 28478.371us 00:09:02.680 00:09:02.680 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:02.680 ================================================================================= 00:09:02.680 1.00000% : 8162.211us 00:09:02.680 10.00000% : 8638.836us 00:09:02.680 25.00000% : 9175.040us 00:09:02.680 50.00000% : 9592.087us 00:09:02.680 75.00000% : 10009.135us 00:09:02.680 90.00000% : 10485.760us 00:09:02.680 95.00000% : 11736.902us 00:09:02.680 98.00000% : 12868.887us 00:09:02.680 99.00000% : 13583.825us 00:09:02.680 99.50000% : 21448.145us 00:09:02.680 99.90000% : 27882.589us 00:09:02.680 99.99000% : 28120.902us 00:09:02.680 99.99900% : 28120.902us 00:09:02.680 99.99990% : 28120.902us 00:09:02.680 99.99999% : 28120.902us 00:09:02.680 00:09:02.680 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:02.680 ================================================================================= 00:09:02.680 1.00000% : 8162.211us 00:09:02.680 10.00000% : 8638.836us 00:09:02.680 25.00000% : 9175.040us 00:09:02.680 50.00000% : 9592.087us 00:09:02.680 75.00000% : 10009.135us 00:09:02.680 90.00000% : 10485.760us 00:09:02.680 95.00000% : 11677.324us 00:09:02.680 98.00000% : 12928.465us 00:09:02.680 
99.00000% : 14120.029us 00:09:02.680 99.50000% : 20614.051us 00:09:02.680 99.90000% : 27048.495us 00:09:02.680 99.99000% : 27286.807us 00:09:02.680 99.99900% : 27286.807us 00:09:02.680 99.99990% : 27286.807us 00:09:02.680 99.99999% : 27286.807us 00:09:02.680 00:09:02.680 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:02.680 ================================================================================= 00:09:02.680 1.00000% : 8043.055us 00:09:02.680 10.00000% : 8638.836us 00:09:02.680 25.00000% : 9175.040us 00:09:02.680 50.00000% : 9592.087us 00:09:02.680 75.00000% : 10009.135us 00:09:02.680 90.00000% : 10485.760us 00:09:02.680 95.00000% : 11617.745us 00:09:02.680 98.00000% : 13107.200us 00:09:02.680 99.00000% : 14298.764us 00:09:02.680 99.50000% : 19899.113us 00:09:02.680 99.90000% : 26095.244us 00:09:02.680 99.99000% : 26452.713us 00:09:02.680 99.99900% : 26452.713us 00:09:02.680 99.99990% : 26452.713us 00:09:02.680 99.99999% : 26452.713us 00:09:02.680 00:09:02.680 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:02.680 ================================================================================= 00:09:02.680 1.00000% : 8043.055us 00:09:02.680 10.00000% : 8638.836us 00:09:02.680 25.00000% : 9175.040us 00:09:02.680 50.00000% : 9592.087us 00:09:02.680 75.00000% : 10009.135us 00:09:02.680 90.00000% : 10485.760us 00:09:02.680 95.00000% : 11677.324us 00:09:02.680 98.00000% : 12988.044us 00:09:02.680 99.00000% : 14298.764us 00:09:02.680 99.50000% : 18945.862us 00:09:02.680 99.90000% : 25380.305us 00:09:02.680 99.99000% : 25618.618us 00:09:02.680 99.99900% : 25618.618us 00:09:02.680 99.99990% : 25618.618us 00:09:02.680 99.99999% : 25618.618us 00:09:02.680 00:09:02.680 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:02.680 ============================================================================== 00:09:02.680 Range in us Cumulative IO count 00:09:02.680 6166.342 - 6196.131: 0.0152% ( 2) 00:09:02.680 6196.131 - 6225.920: 0.0305% ( 2) 00:09:02.680 6225.920 - 6255.709: 0.0534% ( 3) 00:09:02.680 6255.709 - 6285.498: 0.0686% ( 2) 00:09:02.680 6285.498 - 6315.287: 0.0915% ( 3) 00:09:02.680 6315.287 - 6345.076: 0.1067% ( 2) 00:09:02.680 6345.076 - 6374.865: 0.1143% ( 1) 00:09:02.680 6374.865 - 6404.655: 0.1372% ( 3) 00:09:02.680 6404.655 - 6434.444: 0.1601% ( 3) 00:09:02.680 6434.444 - 6464.233: 0.1753% ( 2) 00:09:02.680 6464.233 - 6494.022: 0.1829% ( 1) 00:09:02.680 6494.022 - 6523.811: 0.2058% ( 3) 00:09:02.680 6523.811 - 6553.600: 0.2210% ( 2) 00:09:02.680 6553.600 - 6583.389: 0.2363% ( 2) 00:09:02.680 6583.389 - 6613.178: 0.2515% ( 2) 00:09:02.680 6613.178 - 6642.967: 0.2668% ( 2) 00:09:02.680 6642.967 - 6672.756: 0.2820% ( 2) 00:09:02.680 6672.756 - 6702.545: 0.3049% ( 3) 00:09:02.680 6702.545 - 6732.335: 0.3201% ( 2) 00:09:02.680 6732.335 - 6762.124: 0.3430% ( 3) 00:09:02.680 6762.124 - 6791.913: 0.3582% ( 2) 00:09:02.680 6791.913 - 6821.702: 0.3735% ( 2) 00:09:02.680 6821.702 - 6851.491: 0.3887% ( 2) 00:09:02.680 6851.491 - 6881.280: 0.4116% ( 3) 00:09:02.680 6881.280 - 6911.069: 0.4268% ( 2) 00:09:02.680 6911.069 - 6940.858: 0.4421% ( 2) 00:09:02.680 6940.858 - 6970.647: 0.4497% ( 1) 00:09:02.680 6970.647 - 7000.436: 0.4726% ( 3) 00:09:02.680 7000.436 - 7030.225: 0.4878% ( 2) 00:09:02.680 7864.320 - 7923.898: 0.4954% ( 1) 00:09:02.680 7923.898 - 7983.476: 0.5716% ( 10) 00:09:02.680 7983.476 - 8043.055: 0.8079% ( 31) 00:09:02.680 8043.055 - 8102.633: 1.1738% ( 48) 00:09:02.680 8102.633 - 8162.211: 1.6768% ( 66) 
00:09:02.680 8162.211 - 8221.789: 2.4390% ( 100) 00:09:02.680 8221.789 - 8281.367: 3.4299% ( 130) 00:09:02.680 8281.367 - 8340.945: 4.4360% ( 132) 00:09:02.680 8340.945 - 8400.524: 5.4726% ( 136) 00:09:02.680 8400.524 - 8460.102: 6.6159% ( 150) 00:09:02.680 8460.102 - 8519.680: 7.6829% ( 140) 00:09:02.680 8519.680 - 8579.258: 8.8720% ( 156) 00:09:02.680 8579.258 - 8638.836: 10.0838% ( 159) 00:09:02.680 8638.836 - 8698.415: 11.3796% ( 170) 00:09:02.680 8698.415 - 8757.993: 12.6372% ( 165) 00:09:02.680 8757.993 - 8817.571: 14.1616% ( 200) 00:09:02.680 8817.571 - 8877.149: 15.8079% ( 216) 00:09:02.680 8877.149 - 8936.727: 17.7134% ( 250) 00:09:02.680 8936.727 - 8996.305: 20.0534% ( 307) 00:09:02.680 8996.305 - 9055.884: 22.6905% ( 346) 00:09:02.680 9055.884 - 9115.462: 25.6250% ( 385) 00:09:02.680 9115.462 - 9175.040: 28.6966% ( 403) 00:09:02.680 9175.040 - 9234.618: 31.7149% ( 396) 00:09:02.680 9234.618 - 9294.196: 35.0229% ( 434) 00:09:02.680 9294.196 - 9353.775: 38.4604% ( 451) 00:09:02.680 9353.775 - 9413.353: 41.6845% ( 423) 00:09:02.680 9413.353 - 9472.931: 45.0229% ( 438) 00:09:02.680 9472.931 - 9532.509: 48.2698% ( 426) 00:09:02.680 9532.509 - 9592.087: 51.3262% ( 401) 00:09:02.680 9592.087 - 9651.665: 54.5427% ( 422) 00:09:02.680 9651.665 - 9711.244: 57.6524% ( 408) 00:09:02.680 9711.244 - 9770.822: 60.5640% ( 382) 00:09:02.680 9770.822 - 9830.400: 63.6433% ( 404) 00:09:02.680 9830.400 - 9889.978: 66.6616% ( 396) 00:09:02.680 9889.978 - 9949.556: 69.7409% ( 404) 00:09:02.680 9949.556 - 10009.135: 72.6143% ( 377) 00:09:02.680 10009.135 - 10068.713: 75.6860% ( 403) 00:09:02.680 10068.713 - 10128.291: 78.3460% ( 349) 00:09:02.680 10128.291 - 10187.869: 81.0671% ( 357) 00:09:02.680 10187.869 - 10247.447: 83.3384% ( 298) 00:09:02.680 10247.447 - 10307.025: 85.4192% ( 273) 00:09:02.680 10307.025 - 10366.604: 87.2104% ( 235) 00:09:02.680 10366.604 - 10426.182: 88.4299% ( 160) 00:09:02.680 10426.182 - 10485.760: 89.4131% ( 129) 00:09:02.680 10485.760 - 10545.338: 90.1601% ( 98) 00:09:02.680 10545.338 - 10604.916: 90.7088% ( 72) 00:09:02.680 10604.916 - 10664.495: 91.0595% ( 46) 00:09:02.680 10664.495 - 10724.073: 91.3110% ( 33) 00:09:02.680 10724.073 - 10783.651: 91.5625% ( 33) 00:09:02.680 10783.651 - 10843.229: 91.7759% ( 28) 00:09:02.680 10843.229 - 10902.807: 91.9817% ( 27) 00:09:02.680 10902.807 - 10962.385: 92.1951% ( 28) 00:09:02.680 10962.385 - 11021.964: 92.3704% ( 23) 00:09:02.680 11021.964 - 11081.542: 92.5686% ( 26) 00:09:02.681 11081.542 - 11141.120: 92.7744% ( 27) 00:09:02.681 11141.120 - 11200.698: 93.0030% ( 30) 00:09:02.681 11200.698 - 11260.276: 93.2241% ( 29) 00:09:02.681 11260.276 - 11319.855: 93.4680% ( 32) 00:09:02.681 11319.855 - 11379.433: 93.6509% ( 24) 00:09:02.681 11379.433 - 11439.011: 93.8643% ( 28) 00:09:02.681 11439.011 - 11498.589: 94.1235% ( 34) 00:09:02.681 11498.589 - 11558.167: 94.3064% ( 24) 00:09:02.681 11558.167 - 11617.745: 94.5427% ( 31) 00:09:02.681 11617.745 - 11677.324: 94.7256% ( 24) 00:09:02.681 11677.324 - 11736.902: 94.9390% ( 28) 00:09:02.681 11736.902 - 11796.480: 95.1372% ( 26) 00:09:02.681 11796.480 - 11856.058: 95.3735% ( 31) 00:09:02.681 11856.058 - 11915.636: 95.5869% ( 28) 00:09:02.681 11915.636 - 11975.215: 95.7927% ( 27) 00:09:02.681 11975.215 - 12034.793: 95.9985% ( 27) 00:09:02.681 12034.793 - 12094.371: 96.2271% ( 30) 00:09:02.681 12094.371 - 12153.949: 96.3872% ( 21) 00:09:02.681 12153.949 - 12213.527: 96.5549% ( 22) 00:09:02.681 12213.527 - 12273.105: 96.7073% ( 20) 00:09:02.681 12273.105 - 12332.684: 96.8674% ( 21) 00:09:02.681 
12332.684 - 12392.262: 97.0351% ( 22) 00:09:02.681 12392.262 - 12451.840: 97.1875% ( 20) 00:09:02.681 12451.840 - 12511.418: 97.3323% ( 19) 00:09:02.681 12511.418 - 12570.996: 97.4924% ( 21) 00:09:02.681 12570.996 - 12630.575: 97.6067% ( 15) 00:09:02.681 12630.575 - 12690.153: 97.7439% ( 18) 00:09:02.681 12690.153 - 12749.731: 97.8430% ( 13) 00:09:02.681 12749.731 - 12809.309: 97.9497% ( 14) 00:09:02.681 12809.309 - 12868.887: 98.0335% ( 11) 00:09:02.681 12868.887 - 12928.465: 98.1631% ( 17) 00:09:02.681 12928.465 - 12988.044: 98.2241% ( 8) 00:09:02.681 12988.044 - 13047.622: 98.3003% ( 10) 00:09:02.681 13047.622 - 13107.200: 98.3308% ( 4) 00:09:02.681 13107.200 - 13166.778: 98.3841% ( 7) 00:09:02.681 13166.778 - 13226.356: 98.4146% ( 4) 00:09:02.681 13226.356 - 13285.935: 98.4527% ( 5) 00:09:02.681 13285.935 - 13345.513: 98.4909% ( 5) 00:09:02.681 13345.513 - 13405.091: 98.5290% ( 5) 00:09:02.681 13405.091 - 13464.669: 98.5671% ( 5) 00:09:02.681 13464.669 - 13524.247: 98.5899% ( 3) 00:09:02.681 13524.247 - 13583.825: 98.6280% ( 5) 00:09:02.681 13583.825 - 13643.404: 98.6738% ( 6) 00:09:02.681 13643.404 - 13702.982: 98.7119% ( 5) 00:09:02.681 13702.982 - 13762.560: 98.7500% ( 5) 00:09:02.681 13762.560 - 13822.138: 98.7881% ( 5) 00:09:02.681 13822.138 - 13881.716: 98.8034% ( 2) 00:09:02.681 13881.716 - 13941.295: 98.8262% ( 3) 00:09:02.681 13941.295 - 14000.873: 98.8491% ( 3) 00:09:02.681 14000.873 - 14060.451: 98.8720% ( 3) 00:09:02.681 14060.451 - 14120.029: 98.8872% ( 2) 00:09:02.681 14120.029 - 14179.607: 98.9177% ( 4) 00:09:02.681 14179.607 - 14239.185: 98.9329% ( 2) 00:09:02.681 14239.185 - 14298.764: 98.9558% ( 3) 00:09:02.681 14298.764 - 14358.342: 98.9787% ( 3) 00:09:02.681 14358.342 - 14417.920: 98.9939% ( 2) 00:09:02.681 14417.920 - 14477.498: 99.0244% ( 4) 00:09:02.681 20256.582 - 20375.738: 99.0396% ( 2) 00:09:02.681 20375.738 - 20494.895: 99.0777% ( 5) 00:09:02.681 20494.895 - 20614.051: 99.1082% ( 4) 00:09:02.681 20614.051 - 20733.207: 99.1463% ( 5) 00:09:02.681 20733.207 - 20852.364: 99.1768% ( 4) 00:09:02.681 20852.364 - 20971.520: 99.2149% ( 5) 00:09:02.681 20971.520 - 21090.676: 99.2530% ( 5) 00:09:02.681 21090.676 - 21209.833: 99.2912% ( 5) 00:09:02.681 21209.833 - 21328.989: 99.3140% ( 3) 00:09:02.681 21328.989 - 21448.145: 99.3521% ( 5) 00:09:02.681 21448.145 - 21567.302: 99.3902% ( 5) 00:09:02.681 21567.302 - 21686.458: 99.4284% ( 5) 00:09:02.681 21686.458 - 21805.615: 99.4665% ( 5) 00:09:02.681 21805.615 - 21924.771: 99.4970% ( 4) 00:09:02.681 21924.771 - 22043.927: 99.5122% ( 2) 00:09:02.681 27286.807 - 27405.964: 99.5198% ( 1) 00:09:02.681 27405.964 - 27525.120: 99.5503% ( 4) 00:09:02.681 27525.120 - 27644.276: 99.5808% ( 4) 00:09:02.681 27644.276 - 27763.433: 99.6113% ( 4) 00:09:02.681 27763.433 - 27882.589: 99.6494% ( 5) 00:09:02.681 27882.589 - 28001.745: 99.6646% ( 2) 00:09:02.681 28001.745 - 28120.902: 99.6875% ( 3) 00:09:02.681 28120.902 - 28240.058: 99.7256% ( 5) 00:09:02.681 28240.058 - 28359.215: 99.7713% ( 6) 00:09:02.681 28359.215 - 28478.371: 99.7942% ( 3) 00:09:02.681 28478.371 - 28597.527: 99.8323% ( 5) 00:09:02.681 28597.527 - 28716.684: 99.8628% ( 4) 00:09:02.681 28716.684 - 28835.840: 99.8933% ( 4) 00:09:02.681 28835.840 - 28954.996: 99.9314% ( 5) 00:09:02.681 28954.996 - 29074.153: 99.9543% ( 3) 00:09:02.681 29074.153 - 29193.309: 99.9924% ( 5) 00:09:02.681 29193.309 - 29312.465: 100.0000% ( 1) 00:09:02.681 00:09:02.681 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:02.681 
============================================================================== 00:09:02.681 Range in us Cumulative IO count 00:09:02.681 5838.662 - 5868.451: 0.0152% ( 2) 00:09:02.681 5868.451 - 5898.240: 0.0381% ( 3) 00:09:02.681 5898.240 - 5928.029: 0.0534% ( 2) 00:09:02.681 5928.029 - 5957.818: 0.0762% ( 3) 00:09:02.681 5957.818 - 5987.607: 0.0991% ( 3) 00:09:02.681 5987.607 - 6017.396: 0.1220% ( 3) 00:09:02.681 6017.396 - 6047.185: 0.1372% ( 2) 00:09:02.681 6047.185 - 6076.975: 0.1601% ( 3) 00:09:02.681 6076.975 - 6106.764: 0.1829% ( 3) 00:09:02.681 6106.764 - 6136.553: 0.1982% ( 2) 00:09:02.681 6136.553 - 6166.342: 0.2210% ( 3) 00:09:02.681 6166.342 - 6196.131: 0.2439% ( 3) 00:09:02.681 6196.131 - 6225.920: 0.2668% ( 3) 00:09:02.681 6225.920 - 6255.709: 0.2896% ( 3) 00:09:02.681 6255.709 - 6285.498: 0.3125% ( 3) 00:09:02.681 6285.498 - 6315.287: 0.3277% ( 2) 00:09:02.681 6315.287 - 6345.076: 0.3506% ( 3) 00:09:02.681 6345.076 - 6374.865: 0.3735% ( 3) 00:09:02.681 6374.865 - 6404.655: 0.3963% ( 3) 00:09:02.681 6404.655 - 6434.444: 0.4116% ( 2) 00:09:02.681 6434.444 - 6464.233: 0.4345% ( 3) 00:09:02.681 6464.233 - 6494.022: 0.4573% ( 3) 00:09:02.681 6494.022 - 6523.811: 0.4726% ( 2) 00:09:02.681 6523.811 - 6553.600: 0.4878% ( 2) 00:09:02.681 7923.898 - 7983.476: 0.5030% ( 2) 00:09:02.681 7983.476 - 8043.055: 0.5564% ( 7) 00:09:02.681 8043.055 - 8102.633: 0.7393% ( 24) 00:09:02.681 8102.633 - 8162.211: 1.1128% ( 49) 00:09:02.681 8162.211 - 8221.789: 1.6311% ( 68) 00:09:02.681 8221.789 - 8281.367: 2.3780% ( 98) 00:09:02.681 8281.367 - 8340.945: 3.4070% ( 135) 00:09:02.681 8340.945 - 8400.524: 4.4665% ( 139) 00:09:02.681 8400.524 - 8460.102: 5.6326% ( 153) 00:09:02.681 8460.102 - 8519.680: 6.9512% ( 173) 00:09:02.681 8519.680 - 8579.258: 8.2241% ( 167) 00:09:02.681 8579.258 - 8638.836: 9.5655% ( 176) 00:09:02.681 8638.836 - 8698.415: 10.8841% ( 173) 00:09:02.681 8698.415 - 8757.993: 12.2866% ( 184) 00:09:02.681 8757.993 - 8817.571: 13.8415% ( 204) 00:09:02.681 8817.571 - 8877.149: 15.4345% ( 209) 00:09:02.681 8877.149 - 8936.727: 17.1951% ( 231) 00:09:02.681 8936.727 - 8996.305: 19.2302% ( 267) 00:09:02.681 8996.305 - 9055.884: 21.4710% ( 294) 00:09:02.681 9055.884 - 9115.462: 23.9482% ( 325) 00:09:02.681 9115.462 - 9175.040: 26.6540% ( 355) 00:09:02.681 9175.040 - 9234.618: 29.6189% ( 389) 00:09:02.681 9234.618 - 9294.196: 32.7134% ( 406) 00:09:02.681 9294.196 - 9353.775: 36.0671% ( 440) 00:09:02.681 9353.775 - 9413.353: 39.6951% ( 476) 00:09:02.681 9413.353 - 9472.931: 43.2165% ( 462) 00:09:02.681 9472.931 - 9532.509: 46.8445% ( 476) 00:09:02.681 9532.509 - 9592.087: 50.5183% ( 482) 00:09:02.681 9592.087 - 9651.665: 54.0701% ( 466) 00:09:02.681 9651.665 - 9711.244: 57.8049% ( 490) 00:09:02.681 9711.244 - 9770.822: 61.4101% ( 473) 00:09:02.681 9770.822 - 9830.400: 65.0305% ( 475) 00:09:02.681 9830.400 - 9889.978: 68.4985% ( 455) 00:09:02.681 9889.978 - 9949.556: 71.8979% ( 446) 00:09:02.681 9949.556 - 10009.135: 75.3049% ( 447) 00:09:02.681 10009.135 - 10068.713: 78.5366% ( 424) 00:09:02.681 10068.713 - 10128.291: 81.3643% ( 371) 00:09:02.681 10128.291 - 10187.869: 83.8948% ( 332) 00:09:02.681 10187.869 - 10247.447: 85.9070% ( 264) 00:09:02.681 10247.447 - 10307.025: 87.5152% ( 211) 00:09:02.681 10307.025 - 10366.604: 88.7652% ( 164) 00:09:02.681 10366.604 - 10426.182: 89.7104% ( 124) 00:09:02.681 10426.182 - 10485.760: 90.3887% ( 89) 00:09:02.681 10485.760 - 10545.338: 90.8765% ( 64) 00:09:02.681 10545.338 - 10604.916: 91.2805% ( 53) 00:09:02.681 10604.916 - 10664.495: 91.5549% ( 36) 
00:09:02.681 10664.495 - 10724.073: 91.7988% ( 32) 00:09:02.681 10724.073 - 10783.651: 92.0351% ( 31) 00:09:02.681 10783.651 - 10843.229: 92.2561% ( 29) 00:09:02.681 10843.229 - 10902.807: 92.4924% ( 31) 00:09:02.681 10902.807 - 10962.385: 92.6677% ( 23) 00:09:02.681 10962.385 - 11021.964: 92.8582% ( 25) 00:09:02.681 11021.964 - 11081.542: 93.0259% ( 22) 00:09:02.681 11081.542 - 11141.120: 93.2012% ( 23) 00:09:02.681 11141.120 - 11200.698: 93.3841% ( 24) 00:09:02.681 11200.698 - 11260.276: 93.5366% ( 20) 00:09:02.681 11260.276 - 11319.855: 93.6890% ( 20) 00:09:02.681 11319.855 - 11379.433: 93.8262% ( 18) 00:09:02.681 11379.433 - 11439.011: 93.9863% ( 21) 00:09:02.681 11439.011 - 11498.589: 94.1235% ( 18) 00:09:02.681 11498.589 - 11558.167: 94.2988% ( 23) 00:09:02.681 11558.167 - 11617.745: 94.4512% ( 20) 00:09:02.681 11617.745 - 11677.324: 94.6418% ( 25) 00:09:02.681 11677.324 - 11736.902: 94.8171% ( 23) 00:09:02.681 11736.902 - 11796.480: 95.0381% ( 29) 00:09:02.681 11796.480 - 11856.058: 95.2744% ( 31) 00:09:02.681 11856.058 - 11915.636: 95.4954% ( 29) 00:09:02.681 11915.636 - 11975.215: 95.7470% ( 33) 00:09:02.681 11975.215 - 12034.793: 95.9756% ( 30) 00:09:02.681 12034.793 - 12094.371: 96.2119% ( 31) 00:09:02.681 12094.371 - 12153.949: 96.4482% ( 31) 00:09:02.681 12153.949 - 12213.527: 96.6692% ( 29) 00:09:02.681 12213.527 - 12273.105: 96.9131% ( 32) 00:09:02.681 12273.105 - 12332.684: 97.1037% ( 25) 00:09:02.681 12332.684 - 12392.262: 97.2790% ( 23) 00:09:02.681 12392.262 - 12451.840: 97.4314% ( 20) 00:09:02.681 12451.840 - 12511.418: 97.5686% ( 18) 00:09:02.681 12511.418 - 12570.996: 97.6677% ( 13) 00:09:02.681 12570.996 - 12630.575: 97.7820% ( 15) 00:09:02.681 12630.575 - 12690.153: 97.8887% ( 14) 00:09:02.681 12690.153 - 12749.731: 97.9726% ( 11) 00:09:02.681 12749.731 - 12809.309: 98.0869% ( 15) 00:09:02.682 12809.309 - 12868.887: 98.1555% ( 9) 00:09:02.682 12868.887 - 12928.465: 98.2546% ( 13) 00:09:02.682 12928.465 - 12988.044: 98.3232% ( 9) 00:09:02.682 12988.044 - 13047.622: 98.4146% ( 12) 00:09:02.682 13047.622 - 13107.200: 98.5061% ( 12) 00:09:02.682 13107.200 - 13166.778: 98.5671% ( 8) 00:09:02.682 13166.778 - 13226.356: 98.6204% ( 7) 00:09:02.682 13226.356 - 13285.935: 98.6662% ( 6) 00:09:02.682 13285.935 - 13345.513: 98.7195% ( 7) 00:09:02.682 13345.513 - 13405.091: 98.7576% ( 5) 00:09:02.682 13405.091 - 13464.669: 98.7881% ( 4) 00:09:02.682 13464.669 - 13524.247: 98.8110% ( 3) 00:09:02.682 13524.247 - 13583.825: 98.8415% ( 4) 00:09:02.682 13583.825 - 13643.404: 98.8643% ( 3) 00:09:02.682 13643.404 - 13702.982: 98.8872% ( 3) 00:09:02.682 13702.982 - 13762.560: 98.9177% ( 4) 00:09:02.682 13762.560 - 13822.138: 98.9405% ( 3) 00:09:02.682 13822.138 - 13881.716: 98.9634% ( 3) 00:09:02.682 13881.716 - 13941.295: 98.9863% ( 3) 00:09:02.682 13941.295 - 14000.873: 99.0091% ( 3) 00:09:02.682 14000.873 - 14060.451: 99.0244% ( 2) 00:09:02.682 20137.425 - 20256.582: 99.0473% ( 3) 00:09:02.682 20256.582 - 20375.738: 99.0701% ( 3) 00:09:02.682 20375.738 - 20494.895: 99.1082% ( 5) 00:09:02.682 20494.895 - 20614.051: 99.1387% ( 4) 00:09:02.682 20614.051 - 20733.207: 99.1768% ( 5) 00:09:02.682 20733.207 - 20852.364: 99.2149% ( 5) 00:09:02.682 20852.364 - 20971.520: 99.2530% ( 5) 00:09:02.682 20971.520 - 21090.676: 99.2835% ( 4) 00:09:02.682 21090.676 - 21209.833: 99.3140% ( 4) 00:09:02.682 21209.833 - 21328.989: 99.3521% ( 5) 00:09:02.682 21328.989 - 21448.145: 99.3902% ( 5) 00:09:02.682 21448.145 - 21567.302: 99.4284% ( 5) 00:09:02.682 21567.302 - 21686.458: 99.4588% ( 4) 00:09:02.682 
21686.458 - 21805.615: 99.4970% ( 5) 00:09:02.682 21805.615 - 21924.771: 99.5122% ( 2) 00:09:02.682 26810.182 - 26929.338: 99.5351% ( 3) 00:09:02.682 26929.338 - 27048.495: 99.5732% ( 5) 00:09:02.682 27048.495 - 27167.651: 99.6037% ( 4) 00:09:02.682 27167.651 - 27286.807: 99.6418% ( 5) 00:09:02.682 27286.807 - 27405.964: 99.6799% ( 5) 00:09:02.682 27405.964 - 27525.120: 99.7104% ( 4) 00:09:02.682 27525.120 - 27644.276: 99.7485% ( 5) 00:09:02.682 27644.276 - 27763.433: 99.7866% ( 5) 00:09:02.682 27763.433 - 27882.589: 99.8247% ( 5) 00:09:02.682 27882.589 - 28001.745: 99.8628% ( 5) 00:09:02.682 28001.745 - 28120.902: 99.9009% ( 5) 00:09:02.682 28120.902 - 28240.058: 99.9314% ( 4) 00:09:02.682 28240.058 - 28359.215: 99.9695% ( 5) 00:09:02.682 28359.215 - 28478.371: 100.0000% ( 4) 00:09:02.682 00:09:02.682 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:02.682 ============================================================================== 00:09:02.682 Range in us Cumulative IO count 00:09:02.682 4766.255 - 4796.044: 0.0076% ( 1) 00:09:02.682 4796.044 - 4825.833: 0.0229% ( 2) 00:09:02.682 4825.833 - 4855.622: 0.0457% ( 3) 00:09:02.682 4855.622 - 4885.411: 0.0686% ( 3) 00:09:02.682 4885.411 - 4915.200: 0.0838% ( 2) 00:09:02.682 4915.200 - 4944.989: 0.1067% ( 3) 00:09:02.682 4944.989 - 4974.778: 0.1296% ( 3) 00:09:02.682 4974.778 - 5004.567: 0.1448% ( 2) 00:09:02.682 5004.567 - 5034.356: 0.1677% ( 3) 00:09:02.682 5034.356 - 5064.145: 0.1905% ( 3) 00:09:02.682 5064.145 - 5093.935: 0.2058% ( 2) 00:09:02.682 5093.935 - 5123.724: 0.2134% ( 1) 00:09:02.682 5123.724 - 5153.513: 0.2363% ( 3) 00:09:02.682 5153.513 - 5183.302: 0.2591% ( 3) 00:09:02.682 5183.302 - 5213.091: 0.2744% ( 2) 00:09:02.682 5213.091 - 5242.880: 0.2973% ( 3) 00:09:02.682 5242.880 - 5272.669: 0.3201% ( 3) 00:09:02.682 5272.669 - 5302.458: 0.3430% ( 3) 00:09:02.682 5302.458 - 5332.247: 0.3659% ( 3) 00:09:02.682 5332.247 - 5362.036: 0.3811% ( 2) 00:09:02.682 5362.036 - 5391.825: 0.3963% ( 2) 00:09:02.682 5391.825 - 5421.615: 0.4116% ( 2) 00:09:02.682 5421.615 - 5451.404: 0.4345% ( 3) 00:09:02.682 5451.404 - 5481.193: 0.4497% ( 2) 00:09:02.682 5481.193 - 5510.982: 0.4649% ( 2) 00:09:02.682 5510.982 - 5540.771: 0.4802% ( 2) 00:09:02.682 5540.771 - 5570.560: 0.4878% ( 1) 00:09:02.682 7923.898 - 7983.476: 0.5183% ( 4) 00:09:02.682 7983.476 - 8043.055: 0.5640% ( 6) 00:09:02.682 8043.055 - 8102.633: 0.7470% ( 24) 00:09:02.682 8102.633 - 8162.211: 1.2043% ( 60) 00:09:02.682 8162.211 - 8221.789: 1.8445% ( 84) 00:09:02.682 8221.789 - 8281.367: 2.6753% ( 109) 00:09:02.682 8281.367 - 8340.945: 3.7043% ( 135) 00:09:02.682 8340.945 - 8400.524: 4.7713% ( 140) 00:09:02.682 8400.524 - 8460.102: 5.9985% ( 161) 00:09:02.682 8460.102 - 8519.680: 7.3095% ( 172) 00:09:02.682 8519.680 - 8579.258: 8.6814% ( 180) 00:09:02.682 8579.258 - 8638.836: 10.0686% ( 182) 00:09:02.682 8638.836 - 8698.415: 11.4177% ( 177) 00:09:02.682 8698.415 - 8757.993: 12.8354% ( 186) 00:09:02.682 8757.993 - 8817.571: 14.3369% ( 197) 00:09:02.682 8817.571 - 8877.149: 15.8994% ( 205) 00:09:02.682 8877.149 - 8936.727: 17.7744% ( 246) 00:09:02.682 8936.727 - 8996.305: 19.7713% ( 262) 00:09:02.682 8996.305 - 9055.884: 22.0655% ( 301) 00:09:02.682 9055.884 - 9115.462: 24.4055% ( 307) 00:09:02.682 9115.462 - 9175.040: 27.0503% ( 347) 00:09:02.682 9175.040 - 9234.618: 29.7866% ( 359) 00:09:02.682 9234.618 - 9294.196: 32.9954% ( 421) 00:09:02.682 9294.196 - 9353.775: 36.2881% ( 432) 00:09:02.682 9353.775 - 9413.353: 39.6723% ( 444) 00:09:02.682 9413.353 - 9472.931: 
43.1784% ( 460) 00:09:02.682 9472.931 - 9532.509: 46.7073% ( 463) 00:09:02.682 9532.509 - 9592.087: 50.3354% ( 476) 00:09:02.682 9592.087 - 9651.665: 53.9177% ( 470) 00:09:02.682 9651.665 - 9711.244: 57.4848% ( 468) 00:09:02.682 9711.244 - 9770.822: 61.1280% ( 478) 00:09:02.682 9770.822 - 9830.400: 64.7104% ( 470) 00:09:02.682 9830.400 - 9889.978: 68.2774% ( 468) 00:09:02.682 9889.978 - 9949.556: 71.6997% ( 449) 00:09:02.682 9949.556 - 10009.135: 75.0457% ( 439) 00:09:02.682 10009.135 - 10068.713: 78.1860% ( 412) 00:09:02.682 10068.713 - 10128.291: 81.0976% ( 382) 00:09:02.682 10128.291 - 10187.869: 83.7271% ( 345) 00:09:02.682 10187.869 - 10247.447: 85.6784% ( 256) 00:09:02.682 10247.447 - 10307.025: 87.3704% ( 222) 00:09:02.682 10307.025 - 10366.604: 88.6966% ( 174) 00:09:02.682 10366.604 - 10426.182: 89.7256% ( 135) 00:09:02.682 10426.182 - 10485.760: 90.5412% ( 107) 00:09:02.682 10485.760 - 10545.338: 91.0976% ( 73) 00:09:02.682 10545.338 - 10604.916: 91.5091% ( 54) 00:09:02.682 10604.916 - 10664.495: 91.7988% ( 38) 00:09:02.682 10664.495 - 10724.073: 92.0351% ( 31) 00:09:02.682 10724.073 - 10783.651: 92.2561% ( 29) 00:09:02.682 10783.651 - 10843.229: 92.5000% ( 32) 00:09:02.682 10843.229 - 10902.807: 92.6982% ( 26) 00:09:02.682 10902.807 - 10962.385: 92.9345% ( 31) 00:09:02.682 10962.385 - 11021.964: 93.1707% ( 31) 00:09:02.682 11021.964 - 11081.542: 93.3613% ( 25) 00:09:02.682 11081.542 - 11141.120: 93.5290% ( 22) 00:09:02.682 11141.120 - 11200.698: 93.6966% ( 22) 00:09:02.682 11200.698 - 11260.276: 93.8720% ( 23) 00:09:02.682 11260.276 - 11319.855: 94.0320% ( 21) 00:09:02.682 11319.855 - 11379.433: 94.1692% ( 18) 00:09:02.682 11379.433 - 11439.011: 94.3369% ( 22) 00:09:02.682 11439.011 - 11498.589: 94.5046% ( 22) 00:09:02.682 11498.589 - 11558.167: 94.6341% ( 17) 00:09:02.682 11558.167 - 11617.745: 94.7942% ( 21) 00:09:02.682 11617.745 - 11677.324: 94.9238% ( 17) 00:09:02.682 11677.324 - 11736.902: 95.0762% ( 20) 00:09:02.682 11736.902 - 11796.480: 95.2134% ( 18) 00:09:02.682 11796.480 - 11856.058: 95.3582% ( 19) 00:09:02.682 11856.058 - 11915.636: 95.5259% ( 22) 00:09:02.682 11915.636 - 11975.215: 95.6860% ( 21) 00:09:02.682 11975.215 - 12034.793: 95.8384% ( 20) 00:09:02.682 12034.793 - 12094.371: 95.9909% ( 20) 00:09:02.682 12094.371 - 12153.949: 96.1280% ( 18) 00:09:02.682 12153.949 - 12213.527: 96.3186% ( 25) 00:09:02.682 12213.527 - 12273.105: 96.4558% ( 18) 00:09:02.682 12273.105 - 12332.684: 96.6159% ( 21) 00:09:02.682 12332.684 - 12392.262: 96.7683% ( 20) 00:09:02.682 12392.262 - 12451.840: 96.9055% ( 18) 00:09:02.682 12451.840 - 12511.418: 97.0732% ( 22) 00:09:02.682 12511.418 - 12570.996: 97.2409% ( 22) 00:09:02.682 12570.996 - 12630.575: 97.4314% ( 25) 00:09:02.682 12630.575 - 12690.153: 97.6220% ( 25) 00:09:02.682 12690.153 - 12749.731: 97.7668% ( 19) 00:09:02.682 12749.731 - 12809.309: 97.9192% ( 20) 00:09:02.682 12809.309 - 12868.887: 98.0640% ( 19) 00:09:02.682 12868.887 - 12928.465: 98.1860% ( 16) 00:09:02.682 12928.465 - 12988.044: 98.3232% ( 18) 00:09:02.682 12988.044 - 13047.622: 98.4299% ( 14) 00:09:02.682 13047.622 - 13107.200: 98.5213% ( 12) 00:09:02.682 13107.200 - 13166.778: 98.5899% ( 9) 00:09:02.682 13166.778 - 13226.356: 98.6738% ( 11) 00:09:02.682 13226.356 - 13285.935: 98.7271% ( 7) 00:09:02.682 13285.935 - 13345.513: 98.8110% ( 11) 00:09:02.682 13345.513 - 13405.091: 98.8720% ( 8) 00:09:02.682 13405.091 - 13464.669: 98.9177% ( 6) 00:09:02.682 13464.669 - 13524.247: 98.9558% ( 5) 00:09:02.682 13524.247 - 13583.825: 99.0091% ( 7) 00:09:02.682 13583.825 - 
13643.404: 99.0244% ( 2) 00:09:02.682 19660.800 - 19779.956: 99.0320% ( 1) 00:09:02.682 19779.956 - 19899.113: 99.0625% ( 4) 00:09:02.682 19899.113 - 20018.269: 99.0930% ( 4) 00:09:02.682 20018.269 - 20137.425: 99.1311% ( 5) 00:09:02.682 20137.425 - 20256.582: 99.1692% ( 5) 00:09:02.682 20256.582 - 20375.738: 99.1997% ( 4) 00:09:02.682 20375.738 - 20494.895: 99.2302% ( 4) 00:09:02.682 20494.895 - 20614.051: 99.2683% ( 5) 00:09:02.682 20614.051 - 20733.207: 99.2988% ( 4) 00:09:02.682 20733.207 - 20852.364: 99.3369% ( 5) 00:09:02.682 20852.364 - 20971.520: 99.3750% ( 5) 00:09:02.682 20971.520 - 21090.676: 99.4131% ( 5) 00:09:02.682 21090.676 - 21209.833: 99.4436% ( 4) 00:09:02.682 21209.833 - 21328.989: 99.4817% ( 5) 00:09:02.682 21328.989 - 21448.145: 99.5122% ( 4) 00:09:02.682 26452.713 - 26571.869: 99.5427% ( 4) 00:09:02.682 26571.869 - 26691.025: 99.5808% ( 5) 00:09:02.682 26691.025 - 26810.182: 99.6113% ( 4) 00:09:02.682 26810.182 - 26929.338: 99.6494% ( 5) 00:09:02.682 26929.338 - 27048.495: 99.6799% ( 4) 00:09:02.682 27048.495 - 27167.651: 99.7180% ( 5) 00:09:02.683 27167.651 - 27286.807: 99.7561% ( 5) 00:09:02.683 27286.807 - 27405.964: 99.7866% ( 4) 00:09:02.683 27405.964 - 27525.120: 99.8247% ( 5) 00:09:02.683 27525.120 - 27644.276: 99.8628% ( 5) 00:09:02.683 27644.276 - 27763.433: 99.8933% ( 4) 00:09:02.683 27763.433 - 27882.589: 99.9238% ( 4) 00:09:02.683 27882.589 - 28001.745: 99.9619% ( 5) 00:09:02.683 28001.745 - 28120.902: 100.0000% ( 5) 00:09:02.683 00:09:02.683 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:02.683 ============================================================================== 00:09:02.683 Range in us Cumulative IO count 00:09:02.683 4378.996 - 4408.785: 0.0152% ( 2) 00:09:02.683 4408.785 - 4438.575: 0.0381% ( 3) 00:09:02.683 4438.575 - 4468.364: 0.0534% ( 2) 00:09:02.683 4468.364 - 4498.153: 0.0762% ( 3) 00:09:02.683 4498.153 - 4527.942: 0.0915% ( 2) 00:09:02.683 4527.942 - 4557.731: 0.1067% ( 2) 00:09:02.683 4557.731 - 4587.520: 0.1296% ( 3) 00:09:02.683 4587.520 - 4617.309: 0.1524% ( 3) 00:09:02.683 4617.309 - 4647.098: 0.1753% ( 3) 00:09:02.683 4647.098 - 4676.887: 0.1905% ( 2) 00:09:02.683 4676.887 - 4706.676: 0.2134% ( 3) 00:09:02.683 4706.676 - 4736.465: 0.2363% ( 3) 00:09:02.683 4736.465 - 4766.255: 0.2591% ( 3) 00:09:02.683 4766.255 - 4796.044: 0.2744% ( 2) 00:09:02.683 4796.044 - 4825.833: 0.2973% ( 3) 00:09:02.683 4825.833 - 4855.622: 0.3201% ( 3) 00:09:02.683 4855.622 - 4885.411: 0.3354% ( 2) 00:09:02.683 4885.411 - 4915.200: 0.3582% ( 3) 00:09:02.683 4915.200 - 4944.989: 0.3735% ( 2) 00:09:02.683 4944.989 - 4974.778: 0.3963% ( 3) 00:09:02.683 4974.778 - 5004.567: 0.4116% ( 2) 00:09:02.683 5004.567 - 5034.356: 0.4345% ( 3) 00:09:02.683 5034.356 - 5064.145: 0.4573% ( 3) 00:09:02.683 5064.145 - 5093.935: 0.4802% ( 3) 00:09:02.683 5093.935 - 5123.724: 0.4878% ( 1) 00:09:02.683 7596.218 - 7626.007: 0.4954% ( 1) 00:09:02.683 7626.007 - 7685.585: 0.5259% ( 4) 00:09:02.683 7685.585 - 7745.164: 0.5640% ( 5) 00:09:02.683 7745.164 - 7804.742: 0.5945% ( 4) 00:09:02.683 7804.742 - 7864.320: 0.6402% ( 6) 00:09:02.683 7864.320 - 7923.898: 0.6784% ( 5) 00:09:02.683 7923.898 - 7983.476: 0.7165% ( 5) 00:09:02.683 7983.476 - 8043.055: 0.8155% ( 13) 00:09:02.683 8043.055 - 8102.633: 0.9909% ( 23) 00:09:02.683 8102.633 - 8162.211: 1.4253% ( 57) 00:09:02.683 8162.211 - 8221.789: 2.0732% ( 85) 00:09:02.683 8221.789 - 8281.367: 2.9345% ( 113) 00:09:02.683 8281.367 - 8340.945: 3.8796% ( 124) 00:09:02.683 8340.945 - 8400.524: 4.9771% ( 144) 
00:09:02.683 8400.524 - 8460.102: 6.1890% ( 159) 00:09:02.683 8460.102 - 8519.680: 7.4695% ( 168) 00:09:02.683 8519.680 - 8579.258: 8.7119% ( 163) 00:09:02.683 8579.258 - 8638.836: 10.1220% ( 185) 00:09:02.683 8638.836 - 8698.415: 11.5168% ( 183) 00:09:02.683 8698.415 - 8757.993: 12.9116% ( 183) 00:09:02.683 8757.993 - 8817.571: 14.3826% ( 193) 00:09:02.683 8817.571 - 8877.149: 16.0290% ( 216) 00:09:02.683 8877.149 - 8936.727: 17.8430% ( 238) 00:09:02.683 8936.727 - 8996.305: 19.8704% ( 266) 00:09:02.683 8996.305 - 9055.884: 22.2180% ( 308) 00:09:02.683 9055.884 - 9115.462: 24.5579% ( 307) 00:09:02.683 9115.462 - 9175.040: 27.2713% ( 356) 00:09:02.683 9175.040 - 9234.618: 30.1067% ( 372) 00:09:02.683 9234.618 - 9294.196: 33.1402% ( 398) 00:09:02.683 9294.196 - 9353.775: 36.5015% ( 441) 00:09:02.683 9353.775 - 9413.353: 39.9619% ( 454) 00:09:02.683 9413.353 - 9472.931: 43.4146% ( 453) 00:09:02.683 9472.931 - 9532.509: 46.9055% ( 458) 00:09:02.683 9532.509 - 9592.087: 50.5869% ( 483) 00:09:02.683 9592.087 - 9651.665: 54.0854% ( 459) 00:09:02.683 9651.665 - 9711.244: 57.7210% ( 477) 00:09:02.683 9711.244 - 9770.822: 61.3643% ( 478) 00:09:02.683 9770.822 - 9830.400: 65.0076% ( 478) 00:09:02.683 9830.400 - 9889.978: 68.6509% ( 478) 00:09:02.683 9889.978 - 9949.556: 72.0884% ( 451) 00:09:02.683 9949.556 - 10009.135: 75.4649% ( 443) 00:09:02.683 10009.135 - 10068.713: 78.4985% ( 398) 00:09:02.683 10068.713 - 10128.291: 81.4101% ( 382) 00:09:02.683 10128.291 - 10187.869: 83.9101% ( 328) 00:09:02.683 10187.869 - 10247.447: 85.9604% ( 269) 00:09:02.683 10247.447 - 10307.025: 87.5076% ( 203) 00:09:02.683 10307.025 - 10366.604: 88.6890% ( 155) 00:09:02.683 10366.604 - 10426.182: 89.7027% ( 133) 00:09:02.683 10426.182 - 10485.760: 90.4802% ( 102) 00:09:02.683 10485.760 - 10545.338: 91.0137% ( 70) 00:09:02.683 10545.338 - 10604.916: 91.3948% ( 50) 00:09:02.683 10604.916 - 10664.495: 91.6921% ( 39) 00:09:02.683 10664.495 - 10724.073: 91.9436% ( 33) 00:09:02.683 10724.073 - 10783.651: 92.1494% ( 27) 00:09:02.683 10783.651 - 10843.229: 92.3628% ( 28) 00:09:02.683 10843.229 - 10902.807: 92.5762% ( 28) 00:09:02.683 10902.807 - 10962.385: 92.8125% ( 31) 00:09:02.683 10962.385 - 11021.964: 93.0335% ( 29) 00:09:02.683 11021.964 - 11081.542: 93.2698% ( 31) 00:09:02.683 11081.542 - 11141.120: 93.4756% ( 27) 00:09:02.683 11141.120 - 11200.698: 93.7043% ( 30) 00:09:02.683 11200.698 - 11260.276: 93.9101% ( 27) 00:09:02.683 11260.276 - 11319.855: 94.1463% ( 31) 00:09:02.683 11319.855 - 11379.433: 94.3293% ( 24) 00:09:02.683 11379.433 - 11439.011: 94.5046% ( 23) 00:09:02.683 11439.011 - 11498.589: 94.7027% ( 26) 00:09:02.683 11498.589 - 11558.167: 94.8399% ( 18) 00:09:02.683 11558.167 - 11617.745: 94.9924% ( 20) 00:09:02.683 11617.745 - 11677.324: 95.1143% ( 16) 00:09:02.683 11677.324 - 11736.902: 95.2744% ( 21) 00:09:02.683 11736.902 - 11796.480: 95.4040% ( 17) 00:09:02.683 11796.480 - 11856.058: 95.5412% ( 18) 00:09:02.683 11856.058 - 11915.636: 95.6936% ( 20) 00:09:02.683 11915.636 - 11975.215: 95.8765% ( 24) 00:09:02.683 11975.215 - 12034.793: 96.0442% ( 22) 00:09:02.683 12034.793 - 12094.371: 96.2043% ( 21) 00:09:02.683 12094.371 - 12153.949: 96.3567% ( 20) 00:09:02.683 12153.949 - 12213.527: 96.5244% ( 22) 00:09:02.683 12213.527 - 12273.105: 96.6616% ( 18) 00:09:02.683 12273.105 - 12332.684: 96.8140% ( 20) 00:09:02.683 12332.684 - 12392.262: 96.9665% ( 20) 00:09:02.683 12392.262 - 12451.840: 97.1113% ( 19) 00:09:02.683 12451.840 - 12511.418: 97.2332% ( 16) 00:09:02.683 12511.418 - 12570.996: 97.3704% ( 18) 
00:09:02.683 12570.996 - 12630.575: 97.4924% ( 16) 00:09:02.683 12630.575 - 12690.153: 97.6143% ( 16) 00:09:02.683 12690.153 - 12749.731: 97.7134% ( 13) 00:09:02.683 12749.731 - 12809.309: 97.8277% ( 15) 00:09:02.683 12809.309 - 12868.887: 97.9345% ( 14) 00:09:02.683 12868.887 - 12928.465: 98.0335% ( 13) 00:09:02.683 12928.465 - 12988.044: 98.1098% ( 10) 00:09:02.683 12988.044 - 13047.622: 98.1860% ( 10) 00:09:02.683 13047.622 - 13107.200: 98.2546% ( 9) 00:09:02.683 13107.200 - 13166.778: 98.3308% ( 10) 00:09:02.683 13166.778 - 13226.356: 98.3841% ( 7) 00:09:02.683 13226.356 - 13285.935: 98.4299% ( 6) 00:09:02.683 13285.935 - 13345.513: 98.4756% ( 6) 00:09:02.683 13345.513 - 13405.091: 98.5137% ( 5) 00:09:02.683 13405.091 - 13464.669: 98.5595% ( 6) 00:09:02.683 13464.669 - 13524.247: 98.6052% ( 6) 00:09:02.683 13524.247 - 13583.825: 98.6509% ( 6) 00:09:02.683 13583.825 - 13643.404: 98.6890% ( 5) 00:09:02.683 13643.404 - 13702.982: 98.7424% ( 7) 00:09:02.683 13702.982 - 13762.560: 98.7805% ( 5) 00:09:02.683 13762.560 - 13822.138: 98.8186% ( 5) 00:09:02.683 13822.138 - 13881.716: 98.8567% ( 5) 00:09:02.683 13881.716 - 13941.295: 98.8948% ( 5) 00:09:02.683 13941.295 - 14000.873: 98.9482% ( 7) 00:09:02.683 14000.873 - 14060.451: 98.9939% ( 6) 00:09:02.683 14060.451 - 14120.029: 99.0244% ( 4) 00:09:02.683 18826.705 - 18945.862: 99.0320% ( 1) 00:09:02.683 18945.862 - 19065.018: 99.0549% ( 3) 00:09:02.683 19065.018 - 19184.175: 99.0930% ( 5) 00:09:02.683 19184.175 - 19303.331: 99.1159% ( 3) 00:09:02.683 19303.331 - 19422.487: 99.1540% ( 5) 00:09:02.683 19422.487 - 19541.644: 99.1921% ( 5) 00:09:02.683 19541.644 - 19660.800: 99.2226% ( 4) 00:09:02.683 19660.800 - 19779.956: 99.2607% ( 5) 00:09:02.683 19779.956 - 19899.113: 99.2912% ( 4) 00:09:02.683 19899.113 - 20018.269: 99.3293% ( 5) 00:09:02.683 20018.269 - 20137.425: 99.3674% ( 5) 00:09:02.683 20137.425 - 20256.582: 99.4055% ( 5) 00:09:02.683 20256.582 - 20375.738: 99.4436% ( 5) 00:09:02.683 20375.738 - 20494.895: 99.4741% ( 4) 00:09:02.683 20494.895 - 20614.051: 99.5046% ( 4) 00:09:02.683 20614.051 - 20733.207: 99.5122% ( 1) 00:09:02.683 25618.618 - 25737.775: 99.5427% ( 4) 00:09:02.683 25737.775 - 25856.931: 99.5808% ( 5) 00:09:02.683 25856.931 - 25976.087: 99.6113% ( 4) 00:09:02.683 25976.087 - 26095.244: 99.6494% ( 5) 00:09:02.683 26095.244 - 26214.400: 99.6799% ( 4) 00:09:02.683 26214.400 - 26333.556: 99.7104% ( 4) 00:09:02.683 26333.556 - 26452.713: 99.7485% ( 5) 00:09:02.683 26452.713 - 26571.869: 99.7866% ( 5) 00:09:02.683 26571.869 - 26691.025: 99.8171% ( 4) 00:09:02.683 26691.025 - 26810.182: 99.8552% ( 5) 00:09:02.683 26810.182 - 26929.338: 99.8933% ( 5) 00:09:02.683 26929.338 - 27048.495: 99.9314% ( 5) 00:09:02.683 27048.495 - 27167.651: 99.9695% ( 5) 00:09:02.683 27167.651 - 27286.807: 100.0000% ( 4) 00:09:02.683 00:09:02.683 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:02.683 ============================================================================== 00:09:02.683 Range in us Cumulative IO count 00:09:02.683 3902.371 - 3932.160: 0.0076% ( 1) 00:09:02.683 3932.160 - 3961.949: 0.0229% ( 2) 00:09:02.683 3961.949 - 3991.738: 0.0381% ( 2) 00:09:02.683 3991.738 - 4021.527: 0.0610% ( 3) 00:09:02.683 4021.527 - 4051.316: 0.0838% ( 3) 00:09:02.683 4051.316 - 4081.105: 0.1067% ( 3) 00:09:02.683 4081.105 - 4110.895: 0.1296% ( 3) 00:09:02.683 4110.895 - 4140.684: 0.1448% ( 2) 00:09:02.683 4140.684 - 4170.473: 0.1677% ( 3) 00:09:02.683 4170.473 - 4200.262: 0.1905% ( 3) 00:09:02.683 4200.262 - 4230.051: 0.2134% ( 3) 
00:09:02.683 4230.051 - 4259.840: 0.2363% ( 3) 00:09:02.683 4259.840 - 4289.629: 0.2515% ( 2) 00:09:02.683 4289.629 - 4319.418: 0.2744% ( 3) 00:09:02.683 4319.418 - 4349.207: 0.2973% ( 3) 00:09:02.684 4349.207 - 4378.996: 0.3125% ( 2) 00:09:02.684 4378.996 - 4408.785: 0.3354% ( 3) 00:09:02.684 4408.785 - 4438.575: 0.3582% ( 3) 00:09:02.684 4438.575 - 4468.364: 0.3735% ( 2) 00:09:02.684 4468.364 - 4498.153: 0.3963% ( 3) 00:09:02.684 4498.153 - 4527.942: 0.4192% ( 3) 00:09:02.684 4527.942 - 4557.731: 0.4345% ( 2) 00:09:02.684 4557.731 - 4587.520: 0.4573% ( 3) 00:09:02.684 4587.520 - 4617.309: 0.4802% ( 3) 00:09:02.684 4617.309 - 4647.098: 0.4878% ( 1) 00:09:02.684 7179.171 - 7208.960: 0.5107% ( 3) 00:09:02.684 7208.960 - 7238.749: 0.5259% ( 2) 00:09:02.684 7238.749 - 7268.538: 0.5488% ( 3) 00:09:02.684 7268.538 - 7298.327: 0.5716% ( 3) 00:09:02.684 7298.327 - 7328.116: 0.5945% ( 3) 00:09:02.684 7328.116 - 7357.905: 0.6098% ( 2) 00:09:02.684 7357.905 - 7387.695: 0.6326% ( 3) 00:09:02.684 7387.695 - 7417.484: 0.6555% ( 3) 00:09:02.684 7417.484 - 7447.273: 0.6631% ( 1) 00:09:02.684 7447.273 - 7477.062: 0.6860% ( 3) 00:09:02.684 7477.062 - 7506.851: 0.7012% ( 2) 00:09:02.684 7506.851 - 7536.640: 0.7241% ( 3) 00:09:02.684 7536.640 - 7566.429: 0.7393% ( 2) 00:09:02.684 7566.429 - 7596.218: 0.7622% ( 3) 00:09:02.684 7596.218 - 7626.007: 0.7851% ( 3) 00:09:02.684 7626.007 - 7685.585: 0.8232% ( 5) 00:09:02.684 7685.585 - 7745.164: 0.8613% ( 5) 00:09:02.684 7745.164 - 7804.742: 0.8994% ( 5) 00:09:02.684 7804.742 - 7864.320: 0.9375% ( 5) 00:09:02.684 7864.320 - 7923.898: 0.9756% ( 5) 00:09:02.684 7923.898 - 7983.476: 0.9909% ( 2) 00:09:02.684 7983.476 - 8043.055: 1.0213% ( 4) 00:09:02.684 8043.055 - 8102.633: 1.2043% ( 24) 00:09:02.684 8102.633 - 8162.211: 1.5473% ( 45) 00:09:02.684 8162.211 - 8221.789: 2.1037% ( 73) 00:09:02.684 8221.789 - 8281.367: 2.9345% ( 109) 00:09:02.684 8281.367 - 8340.945: 3.8415% ( 119) 00:09:02.684 8340.945 - 8400.524: 4.9390% ( 144) 00:09:02.684 8400.524 - 8460.102: 6.1738% ( 162) 00:09:02.684 8460.102 - 8519.680: 7.4924% ( 173) 00:09:02.684 8519.680 - 8579.258: 8.8643% ( 180) 00:09:02.684 8579.258 - 8638.836: 10.1905% ( 174) 00:09:02.684 8638.836 - 8698.415: 11.5549% ( 179) 00:09:02.684 8698.415 - 8757.993: 13.0640% ( 198) 00:09:02.684 8757.993 - 8817.571: 14.5579% ( 196) 00:09:02.684 8817.571 - 8877.149: 16.2119% ( 217) 00:09:02.684 8877.149 - 8936.727: 18.0030% ( 235) 00:09:02.684 8936.727 - 8996.305: 20.0991% ( 275) 00:09:02.684 8996.305 - 9055.884: 22.4162% ( 304) 00:09:02.684 9055.884 - 9115.462: 24.9924% ( 338) 00:09:02.684 9115.462 - 9175.040: 27.6753% ( 352) 00:09:02.684 9175.040 - 9234.618: 30.5488% ( 377) 00:09:02.684 9234.618 - 9294.196: 33.6052% ( 401) 00:09:02.684 9294.196 - 9353.775: 36.7988% ( 419) 00:09:02.684 9353.775 - 9413.353: 40.2515% ( 453) 00:09:02.684 9413.353 - 9472.931: 43.8567% ( 473) 00:09:02.684 9472.931 - 9532.509: 47.4466% ( 471) 00:09:02.684 9532.509 - 9592.087: 51.0747% ( 476) 00:09:02.684 9592.087 - 9651.665: 54.7332% ( 480) 00:09:02.684 9651.665 - 9711.244: 58.3460% ( 474) 00:09:02.684 9711.244 - 9770.822: 61.8979% ( 466) 00:09:02.684 9770.822 - 9830.400: 65.4726% ( 469) 00:09:02.684 9830.400 - 9889.978: 68.9329% ( 454) 00:09:02.684 9889.978 - 9949.556: 72.3399% ( 447) 00:09:02.684 9949.556 - 10009.135: 75.6402% ( 433) 00:09:02.684 10009.135 - 10068.713: 78.7576% ( 409) 00:09:02.684 10068.713 - 10128.291: 81.6006% ( 373) 00:09:02.684 10128.291 - 10187.869: 83.9253% ( 305) 00:09:02.684 10187.869 - 10247.447: 85.8003% ( 246) 00:09:02.684 
10247.447 - 10307.025: 87.3476% ( 203) 00:09:02.684 10307.025 - 10366.604: 88.5061% ( 152) 00:09:02.684 10366.604 - 10426.182: 89.4588% ( 125) 00:09:02.684 10426.182 - 10485.760: 90.1601% ( 92) 00:09:02.684 10485.760 - 10545.338: 90.7012% ( 71) 00:09:02.684 10545.338 - 10604.916: 91.0976% ( 52) 00:09:02.684 10604.916 - 10664.495: 91.3796% ( 37) 00:09:02.684 10664.495 - 10724.073: 91.6235% ( 32) 00:09:02.684 10724.073 - 10783.651: 91.8979% ( 36) 00:09:02.684 10783.651 - 10843.229: 92.1113% ( 28) 00:09:02.684 10843.229 - 10902.807: 92.3780% ( 35) 00:09:02.684 10902.807 - 10962.385: 92.5686% ( 25) 00:09:02.684 10962.385 - 11021.964: 92.7744% ( 27) 00:09:02.684 11021.964 - 11081.542: 92.9802% ( 27) 00:09:02.684 11081.542 - 11141.120: 93.1936% ( 28) 00:09:02.684 11141.120 - 11200.698: 93.4146% ( 29) 00:09:02.684 11200.698 - 11260.276: 93.6509% ( 31) 00:09:02.684 11260.276 - 11319.855: 93.8948% ( 32) 00:09:02.684 11319.855 - 11379.433: 94.0930% ( 26) 00:09:02.684 11379.433 - 11439.011: 94.3216% ( 30) 00:09:02.684 11439.011 - 11498.589: 94.5579% ( 31) 00:09:02.684 11498.589 - 11558.167: 94.7561% ( 26) 00:09:02.684 11558.167 - 11617.745: 95.0152% ( 34) 00:09:02.684 11617.745 - 11677.324: 95.2134% ( 26) 00:09:02.684 11677.324 - 11736.902: 95.3887% ( 23) 00:09:02.684 11736.902 - 11796.480: 95.5716% ( 24) 00:09:02.684 11796.480 - 11856.058: 95.7698% ( 26) 00:09:02.684 11856.058 - 11915.636: 95.9527% ( 24) 00:09:02.684 11915.636 - 11975.215: 96.1204% ( 22) 00:09:02.684 11975.215 - 12034.793: 96.2576% ( 18) 00:09:02.684 12034.793 - 12094.371: 96.4405% ( 24) 00:09:02.684 12094.371 - 12153.949: 96.6082% ( 22) 00:09:02.684 12153.949 - 12213.527: 96.7530% ( 19) 00:09:02.684 12213.527 - 12273.105: 96.9055% ( 20) 00:09:02.684 12273.105 - 12332.684: 97.0579% ( 20) 00:09:02.684 12332.684 - 12392.262: 97.2027% ( 19) 00:09:02.684 12392.262 - 12451.840: 97.3171% ( 15) 00:09:02.684 12451.840 - 12511.418: 97.4238% ( 14) 00:09:02.684 12511.418 - 12570.996: 97.5610% ( 18) 00:09:02.684 12570.996 - 12630.575: 97.6372% ( 10) 00:09:02.684 12630.575 - 12690.153: 97.6982% ( 8) 00:09:02.684 12690.153 - 12749.731: 97.7439% ( 6) 00:09:02.684 12749.731 - 12809.309: 97.7820% ( 5) 00:09:02.684 12809.309 - 12868.887: 97.8277% ( 6) 00:09:02.684 12868.887 - 12928.465: 97.8659% ( 5) 00:09:02.684 12928.465 - 12988.044: 97.9116% ( 6) 00:09:02.684 12988.044 - 13047.622: 97.9573% ( 6) 00:09:02.684 13047.622 - 13107.200: 98.0030% ( 6) 00:09:02.684 13107.200 - 13166.778: 98.0640% ( 8) 00:09:02.684 13166.778 - 13226.356: 98.1402% ( 10) 00:09:02.684 13226.356 - 13285.935: 98.2165% ( 10) 00:09:02.684 13285.935 - 13345.513: 98.2851% ( 9) 00:09:02.684 13345.513 - 13405.091: 98.3232% ( 5) 00:09:02.684 13405.091 - 13464.669: 98.3765% ( 7) 00:09:02.684 13464.669 - 13524.247: 98.4299% ( 7) 00:09:02.684 13524.247 - 13583.825: 98.4909% ( 8) 00:09:02.684 13583.825 - 13643.404: 98.5366% ( 6) 00:09:02.684 13643.404 - 13702.982: 98.5899% ( 7) 00:09:02.684 13702.982 - 13762.560: 98.6433% ( 7) 00:09:02.684 13762.560 - 13822.138: 98.6890% ( 6) 00:09:02.684 13822.138 - 13881.716: 98.7424% ( 7) 00:09:02.684 13881.716 - 13941.295: 98.7729% ( 4) 00:09:02.684 13941.295 - 14000.873: 98.8262% ( 7) 00:09:02.684 14000.873 - 14060.451: 98.8796% ( 7) 00:09:02.684 14060.451 - 14120.029: 98.9177% ( 5) 00:09:02.684 14120.029 - 14179.607: 98.9634% ( 6) 00:09:02.684 14179.607 - 14239.185: 98.9863% ( 3) 00:09:02.684 14239.185 - 14298.764: 99.0091% ( 3) 00:09:02.684 14298.764 - 14358.342: 99.0244% ( 2) 00:09:02.684 18111.767 - 18230.924: 99.0396% ( 2) 00:09:02.684 18230.924 
- 18350.080: 99.0701% ( 4) 00:09:02.684 18350.080 - 18469.236: 99.1082% ( 5) 00:09:02.684 18469.236 - 18588.393: 99.1463% ( 5) 00:09:02.684 18588.393 - 18707.549: 99.1845% ( 5) 00:09:02.684 18707.549 - 18826.705: 99.2149% ( 4) 00:09:02.684 18826.705 - 18945.862: 99.2454% ( 4) 00:09:02.684 18945.862 - 19065.018: 99.2759% ( 4) 00:09:02.684 19065.018 - 19184.175: 99.3140% ( 5) 00:09:02.684 19184.175 - 19303.331: 99.3521% ( 5) 00:09:02.684 19303.331 - 19422.487: 99.3826% ( 4) 00:09:02.684 19422.487 - 19541.644: 99.4207% ( 5) 00:09:02.684 19541.644 - 19660.800: 99.4588% ( 5) 00:09:02.684 19660.800 - 19779.956: 99.4970% ( 5) 00:09:02.684 19779.956 - 19899.113: 99.5122% ( 2) 00:09:02.684 24784.524 - 24903.680: 99.5427% ( 4) 00:09:02.684 24903.680 - 25022.836: 99.5808% ( 5) 00:09:02.684 25022.836 - 25141.993: 99.6189% ( 5) 00:09:02.684 25141.993 - 25261.149: 99.6494% ( 4) 00:09:02.684 25261.149 - 25380.305: 99.6875% ( 5) 00:09:02.684 25380.305 - 25499.462: 99.7256% ( 5) 00:09:02.684 25499.462 - 25618.618: 99.7485% ( 3) 00:09:02.684 25618.618 - 25737.775: 99.7866% ( 5) 00:09:02.685 25737.775 - 25856.931: 99.8247% ( 5) 00:09:02.685 25856.931 - 25976.087: 99.8628% ( 5) 00:09:02.685 25976.087 - 26095.244: 99.9009% ( 5) 00:09:02.685 26095.244 - 26214.400: 99.9314% ( 4) 00:09:02.685 26214.400 - 26333.556: 99.9695% ( 5) 00:09:02.685 26333.556 - 26452.713: 100.0000% ( 4) 00:09:02.685 00:09:02.685 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:02.685 ============================================================================== 00:09:02.685 Range in us Cumulative IO count 00:09:02.685 3440.640 - 3455.535: 0.0076% ( 1) 00:09:02.685 3455.535 - 3470.429: 0.0229% ( 2) 00:09:02.685 3470.429 - 3485.324: 0.0305% ( 1) 00:09:02.685 3485.324 - 3500.218: 0.0381% ( 1) 00:09:02.685 3500.218 - 3515.113: 0.0534% ( 2) 00:09:02.685 3515.113 - 3530.007: 0.0610% ( 1) 00:09:02.685 3530.007 - 3544.902: 0.0762% ( 2) 00:09:02.685 3544.902 - 3559.796: 0.0838% ( 1) 00:09:02.685 3559.796 - 3574.691: 0.0915% ( 1) 00:09:02.685 3574.691 - 3589.585: 0.1067% ( 2) 00:09:02.685 3589.585 - 3604.480: 0.1143% ( 1) 00:09:02.685 3604.480 - 3619.375: 0.1296% ( 2) 00:09:02.685 3619.375 - 3634.269: 0.1372% ( 1) 00:09:02.685 3634.269 - 3649.164: 0.1448% ( 1) 00:09:02.685 3649.164 - 3664.058: 0.1601% ( 2) 00:09:02.685 3664.058 - 3678.953: 0.1677% ( 1) 00:09:02.685 3678.953 - 3693.847: 0.1753% ( 1) 00:09:02.685 3693.847 - 3708.742: 0.1905% ( 2) 00:09:02.685 3708.742 - 3723.636: 0.1982% ( 1) 00:09:02.685 3723.636 - 3738.531: 0.2134% ( 2) 00:09:02.685 3738.531 - 3753.425: 0.2210% ( 1) 00:09:02.685 3753.425 - 3768.320: 0.2287% ( 1) 00:09:02.685 3768.320 - 3783.215: 0.2439% ( 2) 00:09:02.685 3783.215 - 3798.109: 0.2515% ( 1) 00:09:02.685 3798.109 - 3813.004: 0.2668% ( 2) 00:09:02.685 3813.004 - 3842.793: 0.2820% ( 2) 00:09:02.685 3842.793 - 3872.582: 0.2973% ( 2) 00:09:02.685 3872.582 - 3902.371: 0.3201% ( 3) 00:09:02.685 3902.371 - 3932.160: 0.3354% ( 2) 00:09:02.685 3932.160 - 3961.949: 0.3582% ( 3) 00:09:02.685 3961.949 - 3991.738: 0.3811% ( 3) 00:09:02.685 3991.738 - 4021.527: 0.3963% ( 2) 00:09:02.685 4021.527 - 4051.316: 0.4192% ( 3) 00:09:02.685 4051.316 - 4081.105: 0.4421% ( 3) 00:09:02.685 4081.105 - 4110.895: 0.4573% ( 2) 00:09:02.685 4110.895 - 4140.684: 0.4802% ( 3) 00:09:02.685 4140.684 - 4170.473: 0.4878% ( 1) 00:09:02.685 6702.545 - 6732.335: 0.4954% ( 1) 00:09:02.685 6732.335 - 6762.124: 0.5183% ( 3) 00:09:02.685 6762.124 - 6791.913: 0.5335% ( 2) 00:09:02.685 6791.913 - 6821.702: 0.5564% ( 3) 00:09:02.685 6821.702 - 
6851.491: 0.5793% ( 3) 00:09:02.685 6851.491 - 6881.280: 0.5945% ( 2) 00:09:02.685 6881.280 - 6911.069: 0.6174% ( 3) 00:09:02.685 6911.069 - 6940.858: 0.6402% ( 3) 00:09:02.685 6940.858 - 6970.647: 0.6555% ( 2) 00:09:02.685 6970.647 - 7000.436: 0.6784% ( 3) 00:09:02.685 7000.436 - 7030.225: 0.7012% ( 3) 00:09:02.685 7030.225 - 7060.015: 0.7165% ( 2) 00:09:02.685 7060.015 - 7089.804: 0.7393% ( 3) 00:09:02.685 7089.804 - 7119.593: 0.7622% ( 3) 00:09:02.685 7119.593 - 7149.382: 0.7774% ( 2) 00:09:02.685 7149.382 - 7179.171: 0.8003% ( 3) 00:09:02.685 7179.171 - 7208.960: 0.8232% ( 3) 00:09:02.685 7208.960 - 7238.749: 0.8384% ( 2) 00:09:02.685 7238.749 - 7268.538: 0.8613% ( 3) 00:09:02.685 7268.538 - 7298.327: 0.8841% ( 3) 00:09:02.685 7298.327 - 7328.116: 0.9070% ( 3) 00:09:02.685 7328.116 - 7357.905: 0.9223% ( 2) 00:09:02.685 7357.905 - 7387.695: 0.9375% ( 2) 00:09:02.685 7387.695 - 7417.484: 0.9604% ( 3) 00:09:02.685 7417.484 - 7447.273: 0.9756% ( 2) 00:09:02.685 7923.898 - 7983.476: 0.9909% ( 2) 00:09:02.685 7983.476 - 8043.055: 1.0137% ( 3) 00:09:02.685 8043.055 - 8102.633: 1.1966% ( 24) 00:09:02.685 8102.633 - 8162.211: 1.5015% ( 40) 00:09:02.685 8162.211 - 8221.789: 1.9970% ( 65) 00:09:02.685 8221.789 - 8281.367: 2.7668% ( 101) 00:09:02.685 8281.367 - 8340.945: 3.7348% ( 127) 00:09:02.685 8340.945 - 8400.524: 4.9085% ( 154) 00:09:02.685 8400.524 - 8460.102: 6.2729% ( 179) 00:09:02.685 8460.102 - 8519.680: 7.5534% ( 168) 00:09:02.685 8519.680 - 8579.258: 8.9101% ( 178) 00:09:02.685 8579.258 - 8638.836: 10.2820% ( 180) 00:09:02.685 8638.836 - 8698.415: 11.6540% ( 180) 00:09:02.685 8698.415 - 8757.993: 13.0945% ( 189) 00:09:02.685 8757.993 - 8817.571: 14.5274% ( 188) 00:09:02.685 8817.571 - 8877.149: 16.0595% ( 201) 00:09:02.685 8877.149 - 8936.727: 17.8049% ( 229) 00:09:02.685 8936.727 - 8996.305: 19.8247% ( 265) 00:09:02.685 8996.305 - 9055.884: 22.1189% ( 301) 00:09:02.685 9055.884 - 9115.462: 24.7027% ( 339) 00:09:02.685 9115.462 - 9175.040: 27.4695% ( 363) 00:09:02.685 9175.040 - 9234.618: 30.3582% ( 379) 00:09:02.685 9234.618 - 9294.196: 33.3994% ( 399) 00:09:02.685 9294.196 - 9353.775: 36.7149% ( 435) 00:09:02.685 9353.775 - 9413.353: 40.1753% ( 454) 00:09:02.685 9413.353 - 9472.931: 43.8567% ( 483) 00:09:02.685 9472.931 - 9532.509: 47.3933% ( 464) 00:09:02.685 9532.509 - 9592.087: 51.0671% ( 482) 00:09:02.685 9592.087 - 9651.665: 54.7561% ( 484) 00:09:02.685 9651.665 - 9711.244: 58.3384% ( 470) 00:09:02.685 9711.244 - 9770.822: 62.0732% ( 490) 00:09:02.685 9770.822 - 9830.400: 65.5716% ( 459) 00:09:02.685 9830.400 - 9889.978: 69.1616% ( 471) 00:09:02.685 9889.978 - 9949.556: 72.6143% ( 453) 00:09:02.685 9949.556 - 10009.135: 75.9832% ( 442) 00:09:02.685 10009.135 - 10068.713: 79.1159% ( 411) 00:09:02.685 10068.713 - 10128.291: 82.0274% ( 382) 00:09:02.685 10128.291 - 10187.869: 84.3979% ( 311) 00:09:02.685 10187.869 - 10247.447: 86.2729% ( 246) 00:09:02.685 10247.447 - 10307.025: 87.7668% ( 196) 00:09:02.685 10307.025 - 10366.604: 88.9939% ( 161) 00:09:02.685 10366.604 - 10426.182: 89.8552% ( 113) 00:09:02.685 10426.182 - 10485.760: 90.4726% ( 81) 00:09:02.685 10485.760 - 10545.338: 90.8841% ( 54) 00:09:02.685 10545.338 - 10604.916: 91.2195% ( 44) 00:09:02.685 10604.916 - 10664.495: 91.4787% ( 34) 00:09:02.685 10664.495 - 10724.073: 91.6921% ( 28) 00:09:02.685 10724.073 - 10783.651: 91.8826% ( 25) 00:09:02.685 10783.651 - 10843.229: 92.0884% ( 27) 00:09:02.685 10843.229 - 10902.807: 92.2866% ( 26) 00:09:02.685 10902.807 - 10962.385: 92.4848% ( 26) 00:09:02.685 10962.385 - 11021.964: 
92.6829% ( 26) 00:09:02.685 11021.964 - 11081.542: 92.8963% ( 28) 00:09:02.685 11081.542 - 11141.120: 93.1021% ( 27) 00:09:02.685 11141.120 - 11200.698: 93.2927% ( 25) 00:09:02.685 11200.698 - 11260.276: 93.5137% ( 29) 00:09:02.685 11260.276 - 11319.855: 93.7195% ( 27) 00:09:02.685 11319.855 - 11379.433: 93.9405% ( 29) 00:09:02.685 11379.433 - 11439.011: 94.1768% ( 31) 00:09:02.685 11439.011 - 11498.589: 94.4436% ( 35) 00:09:02.685 11498.589 - 11558.167: 94.6875% ( 32) 00:09:02.685 11558.167 - 11617.745: 94.9238% ( 31) 00:09:02.685 11617.745 - 11677.324: 95.1448% ( 29) 00:09:02.685 11677.324 - 11736.902: 95.3659% ( 29) 00:09:02.685 11736.902 - 11796.480: 95.5945% ( 30) 00:09:02.685 11796.480 - 11856.058: 95.7774% ( 24) 00:09:02.685 11856.058 - 11915.636: 95.9680% ( 25) 00:09:02.685 11915.636 - 11975.215: 96.1204% ( 20) 00:09:02.685 11975.215 - 12034.793: 96.2500% ( 17) 00:09:02.685 12034.793 - 12094.371: 96.4101% ( 21) 00:09:02.685 12094.371 - 12153.949: 96.5549% ( 19) 00:09:02.685 12153.949 - 12213.527: 96.6921% ( 18) 00:09:02.685 12213.527 - 12273.105: 96.8521% ( 21) 00:09:02.685 12273.105 - 12332.684: 97.0046% ( 20) 00:09:02.685 12332.684 - 12392.262: 97.1037% ( 13) 00:09:02.685 12392.262 - 12451.840: 97.2256% ( 16) 00:09:02.685 12451.840 - 12511.418: 97.3628% ( 18) 00:09:02.685 12511.418 - 12570.996: 97.5000% ( 18) 00:09:02.685 12570.996 - 12630.575: 97.6067% ( 14) 00:09:02.685 12630.575 - 12690.153: 97.6905% ( 11) 00:09:02.685 12690.153 - 12749.731: 97.7820% ( 12) 00:09:02.685 12749.731 - 12809.309: 97.8430% ( 8) 00:09:02.685 12809.309 - 12868.887: 97.9040% ( 8) 00:09:02.685 12868.887 - 12928.465: 97.9726% ( 9) 00:09:02.685 12928.465 - 12988.044: 98.0412% ( 9) 00:09:02.685 12988.044 - 13047.622: 98.0945% ( 7) 00:09:02.685 13047.622 - 13107.200: 98.1555% ( 8) 00:09:02.685 13107.200 - 13166.778: 98.2012% ( 6) 00:09:02.685 13166.778 - 13226.356: 98.2470% ( 6) 00:09:02.685 13226.356 - 13285.935: 98.3155% ( 9) 00:09:02.685 13285.935 - 13345.513: 98.3765% ( 8) 00:09:02.685 13345.513 - 13405.091: 98.4146% ( 5) 00:09:02.685 13405.091 - 13464.669: 98.4527% ( 5) 00:09:02.685 13464.669 - 13524.247: 98.4985% ( 6) 00:09:02.685 13524.247 - 13583.825: 98.5366% ( 5) 00:09:02.685 13583.825 - 13643.404: 98.5823% ( 6) 00:09:02.685 13643.404 - 13702.982: 98.6280% ( 6) 00:09:02.685 13702.982 - 13762.560: 98.6738% ( 6) 00:09:02.685 13762.560 - 13822.138: 98.7119% ( 5) 00:09:02.685 13822.138 - 13881.716: 98.7576% ( 6) 00:09:02.685 13881.716 - 13941.295: 98.8034% ( 6) 00:09:02.685 13941.295 - 14000.873: 98.8491% ( 6) 00:09:02.685 14000.873 - 14060.451: 98.8948% ( 6) 00:09:02.685 14060.451 - 14120.029: 98.9329% ( 5) 00:09:02.685 14120.029 - 14179.607: 98.9558% ( 3) 00:09:02.685 14179.607 - 14239.185: 98.9863% ( 4) 00:09:02.685 14239.185 - 14298.764: 99.0091% ( 3) 00:09:02.685 14298.764 - 14358.342: 99.0244% ( 2) 00:09:02.685 17158.516 - 17277.673: 99.0396% ( 2) 00:09:02.685 17277.673 - 17396.829: 99.0701% ( 4) 00:09:02.685 17396.829 - 17515.985: 99.1006% ( 4) 00:09:02.685 17515.985 - 17635.142: 99.1311% ( 4) 00:09:02.685 17635.142 - 17754.298: 99.1692% ( 5) 00:09:02.685 17754.298 - 17873.455: 99.1997% ( 4) 00:09:02.685 17873.455 - 17992.611: 99.2378% ( 5) 00:09:02.685 17992.611 - 18111.767: 99.2759% ( 5) 00:09:02.685 18111.767 - 18230.924: 99.3064% ( 4) 00:09:02.685 18230.924 - 18350.080: 99.3445% ( 5) 00:09:02.685 18350.080 - 18469.236: 99.3826% ( 5) 00:09:02.685 18469.236 - 18588.393: 99.3979% ( 2) 00:09:02.685 18588.393 - 18707.549: 99.4284% ( 4) 00:09:02.685 18707.549 - 18826.705: 99.4665% ( 5) 
00:09:02.685 18826.705 - 18945.862: 99.5046% ( 5) 00:09:02.685 18945.862 - 19065.018: 99.5122% ( 1) 00:09:02.685 23950.429 - 24069.585: 99.5503% ( 5) 00:09:02.685 24069.585 - 24188.742: 99.5884% ( 5) 00:09:02.685 24188.742 - 24307.898: 99.6189% ( 4) 00:09:02.685 24307.898 - 24427.055: 99.6494% ( 4) 00:09:02.686 24427.055 - 24546.211: 99.6875% ( 5) 00:09:02.686 24546.211 - 24665.367: 99.7180% ( 4) 00:09:02.686 24665.367 - 24784.524: 99.7485% ( 4) 00:09:02.686 24784.524 - 24903.680: 99.7866% ( 5) 00:09:02.686 24903.680 - 25022.836: 99.8171% ( 4) 00:09:02.686 25022.836 - 25141.993: 99.8552% ( 5) 00:09:02.686 25141.993 - 25261.149: 99.8933% ( 5) 00:09:02.686 25261.149 - 25380.305: 99.9238% ( 4) 00:09:02.686 25380.305 - 25499.462: 99.9619% ( 5) 00:09:02.686 25499.462 - 25618.618: 100.0000% ( 5) 00:09:02.686 00:09:02.686 05:55:54 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:03.622 Initializing NVMe Controllers 00:09:03.622 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:03.622 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:03.622 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:03.622 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:03.622 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:03.622 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:03.622 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:03.622 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:03.622 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:03.622 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:03.622 Initialization complete. Launching workers. 00:09:03.622 ======================================================== 00:09:03.622 Latency(us) 00:09:03.622 Device Information : IOPS MiB/s Average min max 00:09:03.622 PCIE (0000:00:10.0) NSID 1 from core 0: 11897.08 139.42 10762.80 6725.36 34022.61 00:09:03.622 PCIE (0000:00:11.0) NSID 1 from core 0: 11897.08 139.42 10750.65 6603.84 33053.31 00:09:03.622 PCIE (0000:00:13.0) NSID 1 from core 0: 11897.08 139.42 10736.88 5473.37 33031.79 00:09:03.622 PCIE (0000:00:12.0) NSID 1 from core 0: 11897.08 139.42 10723.44 4996.66 32234.00 00:09:03.622 PCIE (0000:00:12.0) NSID 2 from core 0: 11961.04 140.17 10651.73 4552.84 25796.51 00:09:03.622 PCIE (0000:00:12.0) NSID 3 from core 0: 11961.04 140.17 10637.56 4101.53 24937.38 00:09:03.622 ======================================================== 00:09:03.622 Total : 71510.38 838.01 10710.39 4101.53 34022.61 00:09:03.622 00:09:03.622 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:03.622 ================================================================================= 00:09:03.622 1.00000% : 9175.040us 00:09:03.622 10.00000% : 9651.665us 00:09:03.622 25.00000% : 10009.135us 00:09:03.622 50.00000% : 10426.182us 00:09:03.622 75.00000% : 10962.385us 00:09:03.622 90.00000% : 11856.058us 00:09:03.622 95.00000% : 12690.153us 00:09:03.622 98.00000% : 13643.404us 00:09:03.622 99.00000% : 25856.931us 00:09:03.622 99.50000% : 32648.844us 00:09:03.622 99.90000% : 33840.407us 00:09:03.622 99.99000% : 34078.720us 00:09:03.622 99.99900% : 34078.720us 00:09:03.622 99.99990% : 34078.720us 00:09:03.622 99.99999% : 34078.720us 00:09:03.622 00:09:03.622 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:03.622 ================================================================================= 00:09:03.622 1.00000% : 9353.775us 
00:09:03.622 10.00000% : 9770.822us 00:09:03.622 25.00000% : 10068.713us 00:09:03.622 50.00000% : 10426.182us 00:09:03.622 75.00000% : 10902.807us 00:09:03.622 90.00000% : 11796.480us 00:09:03.622 95.00000% : 12570.996us 00:09:03.622 98.00000% : 13643.404us 00:09:03.622 99.00000% : 25380.305us 00:09:03.622 99.50000% : 31933.905us 00:09:03.622 99.90000% : 32887.156us 00:09:03.622 99.99000% : 33125.469us 00:09:03.622 99.99900% : 33125.469us 00:09:03.622 99.99990% : 33125.469us 00:09:03.622 99.99999% : 33125.469us 00:09:03.622 00:09:03.622 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:03.622 ================================================================================= 00:09:03.622 1.00000% : 9234.618us 00:09:03.622 10.00000% : 9770.822us 00:09:03.622 25.00000% : 10068.713us 00:09:03.622 50.00000% : 10426.182us 00:09:03.622 75.00000% : 10902.807us 00:09:03.622 90.00000% : 11796.480us 00:09:03.622 95.00000% : 12570.996us 00:09:03.622 98.00000% : 13524.247us 00:09:03.622 99.00000% : 25499.462us 00:09:03.622 99.50000% : 31933.905us 00:09:03.622 99.90000% : 32887.156us 00:09:03.622 99.99000% : 33125.469us 00:09:03.622 99.99900% : 33125.469us 00:09:03.622 99.99990% : 33125.469us 00:09:03.622 99.99999% : 33125.469us 00:09:03.622 00:09:03.622 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:03.622 ================================================================================= 00:09:03.622 1.00000% : 8757.993us 00:09:03.622 10.00000% : 9770.822us 00:09:03.622 25.00000% : 10068.713us 00:09:03.622 50.00000% : 10426.182us 00:09:03.622 75.00000% : 10902.807us 00:09:03.622 90.00000% : 11796.480us 00:09:03.622 95.00000% : 12630.575us 00:09:03.622 98.00000% : 13405.091us 00:09:03.622 99.00000% : 24665.367us 00:09:03.622 99.50000% : 31218.967us 00:09:03.622 99.90000% : 32172.218us 00:09:03.622 99.99000% : 32410.531us 00:09:03.622 99.99900% : 32410.531us 00:09:03.622 99.99990% : 32410.531us 00:09:03.622 99.99999% : 32410.531us 00:09:03.622 00:09:03.622 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:03.622 ================================================================================= 00:09:03.622 1.00000% : 8400.524us 00:09:03.622 10.00000% : 9770.822us 00:09:03.622 25.00000% : 10068.713us 00:09:03.622 50.00000% : 10426.182us 00:09:03.622 75.00000% : 10902.807us 00:09:03.622 90.00000% : 11796.480us 00:09:03.622 95.00000% : 12630.575us 00:09:03.622 98.00000% : 13583.825us 00:09:03.622 99.00000% : 17873.455us 00:09:03.622 99.50000% : 24665.367us 00:09:03.622 99.90000% : 25618.618us 00:09:03.622 99.99000% : 25856.931us 00:09:03.622 99.99900% : 25856.931us 00:09:03.622 99.99990% : 25856.931us 00:09:03.622 99.99999% : 25856.931us 00:09:03.622 00:09:03.622 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:03.622 ================================================================================= 00:09:03.622 1.00000% : 8043.055us 00:09:03.622 10.00000% : 9770.822us 00:09:03.622 25.00000% : 10068.713us 00:09:03.622 50.00000% : 10426.182us 00:09:03.622 75.00000% : 10902.807us 00:09:03.622 90.00000% : 11796.480us 00:09:03.622 95.00000% : 12749.731us 00:09:03.622 98.00000% : 13643.404us 00:09:03.622 99.00000% : 16920.204us 00:09:03.622 99.50000% : 23712.116us 00:09:03.622 99.90000% : 24784.524us 00:09:03.622 99.99000% : 24903.680us 00:09:03.622 99.99900% : 25022.836us 00:09:03.622 99.99990% : 25022.836us 00:09:03.622 99.99999% : 25022.836us 00:09:03.622 00:09:03.622 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 
0: 00:09:03.622 ============================================================================== 00:09:03.623 Range in us Cumulative IO count 00:09:03.623 6702.545 - 6732.335: 0.0168% ( 2) 00:09:03.623 6732.335 - 6762.124: 0.0252% ( 1) 00:09:03.623 6762.124 - 6791.913: 0.0420% ( 2) 00:09:03.623 6791.913 - 6821.702: 0.0504% ( 1) 00:09:03.623 6821.702 - 6851.491: 0.0756% ( 3) 00:09:03.623 6851.491 - 6881.280: 0.0924% ( 2) 00:09:03.623 6881.280 - 6911.069: 0.1008% ( 1) 00:09:03.623 6911.069 - 6940.858: 0.1176% ( 2) 00:09:03.623 6940.858 - 6970.647: 0.1344% ( 2) 00:09:03.623 6970.647 - 7000.436: 0.1596% ( 3) 00:09:03.623 7000.436 - 7030.225: 0.1680% ( 1) 00:09:03.623 7030.225 - 7060.015: 0.1848% ( 2) 00:09:03.623 7060.015 - 7089.804: 0.2016% ( 2) 00:09:03.623 7089.804 - 7119.593: 0.2184% ( 2) 00:09:03.623 7119.593 - 7149.382: 0.2352% ( 2) 00:09:03.623 7149.382 - 7179.171: 0.2520% ( 2) 00:09:03.623 7179.171 - 7208.960: 0.2688% ( 2) 00:09:03.623 7208.960 - 7238.749: 0.2856% ( 2) 00:09:03.623 7238.749 - 7268.538: 0.3024% ( 2) 00:09:03.623 7268.538 - 7298.327: 0.3192% ( 2) 00:09:03.623 7298.327 - 7328.116: 0.3360% ( 2) 00:09:03.623 7328.116 - 7357.905: 0.3528% ( 2) 00:09:03.623 7357.905 - 7387.695: 0.3696% ( 2) 00:09:03.623 7387.695 - 7417.484: 0.3864% ( 2) 00:09:03.623 7417.484 - 7447.273: 0.3948% ( 1) 00:09:03.623 7447.273 - 7477.062: 0.4116% ( 2) 00:09:03.623 7477.062 - 7506.851: 0.4200% ( 1) 00:09:03.623 7506.851 - 7536.640: 0.4452% ( 3) 00:09:03.623 7566.429 - 7596.218: 0.4704% ( 3) 00:09:03.623 7596.218 - 7626.007: 0.4788% ( 1) 00:09:03.623 7626.007 - 7685.585: 0.5208% ( 5) 00:09:03.623 7685.585 - 7745.164: 0.5376% ( 2) 00:09:03.623 8936.727 - 8996.305: 0.5460% ( 1) 00:09:03.623 8996.305 - 9055.884: 0.6132% ( 8) 00:09:03.623 9055.884 - 9115.462: 0.7560% ( 17) 00:09:03.623 9115.462 - 9175.040: 1.0837% ( 39) 00:09:03.623 9175.040 - 9234.618: 1.3189% ( 28) 00:09:03.623 9234.618 - 9294.196: 1.9237% ( 72) 00:09:03.623 9294.196 - 9353.775: 2.4866% ( 67) 00:09:03.623 9353.775 - 9413.353: 3.2342% ( 89) 00:09:03.623 9413.353 - 9472.931: 4.4271% ( 142) 00:09:03.623 9472.931 - 9532.509: 6.0064% ( 188) 00:09:03.623 9532.509 - 9592.087: 7.9805% ( 235) 00:09:03.623 9592.087 - 9651.665: 10.0554% ( 247) 00:09:03.623 9651.665 - 9711.244: 12.2480% ( 261) 00:09:03.623 9711.244 - 9770.822: 14.7933% ( 303) 00:09:03.623 9770.822 - 9830.400: 17.4983% ( 322) 00:09:03.623 9830.400 - 9889.978: 20.4637% ( 353) 00:09:03.623 9889.978 - 9949.556: 23.4123% ( 351) 00:09:03.623 9949.556 - 10009.135: 26.5625% ( 375) 00:09:03.623 10009.135 - 10068.713: 29.7547% ( 380) 00:09:03.623 10068.713 - 10128.291: 33.0897% ( 397) 00:09:03.623 10128.291 - 10187.869: 36.3911% ( 393) 00:09:03.623 10187.869 - 10247.447: 39.7681% ( 402) 00:09:03.623 10247.447 - 10307.025: 43.2544% ( 415) 00:09:03.623 10307.025 - 10366.604: 46.6314% ( 402) 00:09:03.623 10366.604 - 10426.182: 50.1092% ( 414) 00:09:03.623 10426.182 - 10485.760: 53.4022% ( 392) 00:09:03.623 10485.760 - 10545.338: 56.5356% ( 373) 00:09:03.623 10545.338 - 10604.916: 59.7614% ( 384) 00:09:03.623 10604.916 - 10664.495: 62.9032% ( 374) 00:09:03.623 10664.495 - 10724.073: 65.8686% ( 353) 00:09:03.623 10724.073 - 10783.651: 68.8592% ( 356) 00:09:03.623 10783.651 - 10843.229: 71.5054% ( 315) 00:09:03.623 10843.229 - 10902.807: 73.9835% ( 295) 00:09:03.623 10902.807 - 10962.385: 76.1593% ( 259) 00:09:03.623 10962.385 - 11021.964: 78.2342% ( 247) 00:09:03.623 11021.964 - 11081.542: 79.9899% ( 209) 00:09:03.623 11081.542 - 11141.120: 81.4852% ( 178) 00:09:03.623 11141.120 - 11200.698: 
82.7117% ( 146) 00:09:03.623 11200.698 - 11260.276: 83.8374% ( 134) 00:09:03.623 11260.276 - 11319.855: 84.7194% ( 105) 00:09:03.623 11319.855 - 11379.433: 85.5847% ( 103) 00:09:03.623 11379.433 - 11439.011: 86.4079% ( 98) 00:09:03.623 11439.011 - 11498.589: 87.1304% ( 86) 00:09:03.623 11498.589 - 11558.167: 87.7520% ( 74) 00:09:03.623 11558.167 - 11617.745: 88.3317% ( 69) 00:09:03.623 11617.745 - 11677.324: 88.8525% ( 62) 00:09:03.623 11677.324 - 11736.902: 89.2893% ( 52) 00:09:03.623 11736.902 - 11796.480: 89.7429% ( 54) 00:09:03.623 11796.480 - 11856.058: 90.2134% ( 56) 00:09:03.623 11856.058 - 11915.636: 90.6670% ( 54) 00:09:03.623 11915.636 - 11975.215: 91.0198% ( 42) 00:09:03.623 11975.215 - 12034.793: 91.4399% ( 50) 00:09:03.623 12034.793 - 12094.371: 91.9019% ( 55) 00:09:03.623 12094.371 - 12153.949: 92.2883% ( 46) 00:09:03.623 12153.949 - 12213.527: 92.7251% ( 52) 00:09:03.623 12213.527 - 12273.105: 93.0864% ( 43) 00:09:03.623 12273.105 - 12332.684: 93.4980% ( 49) 00:09:03.623 12332.684 - 12392.262: 93.7584% ( 31) 00:09:03.623 12392.262 - 12451.840: 94.0776% ( 38) 00:09:03.623 12451.840 - 12511.418: 94.4052% ( 39) 00:09:03.623 12511.418 - 12570.996: 94.7329% ( 39) 00:09:03.623 12570.996 - 12630.575: 94.9933% ( 31) 00:09:03.623 12630.575 - 12690.153: 95.3125% ( 38) 00:09:03.623 12690.153 - 12749.731: 95.5729% ( 31) 00:09:03.623 12749.731 - 12809.309: 95.8669% ( 35) 00:09:03.623 12809.309 - 12868.887: 96.0853% ( 26) 00:09:03.623 12868.887 - 12928.465: 96.3374% ( 30) 00:09:03.623 12928.465 - 12988.044: 96.5306% ( 23) 00:09:03.623 12988.044 - 13047.622: 96.7574% ( 27) 00:09:03.623 13047.622 - 13107.200: 96.9674% ( 25) 00:09:03.623 13107.200 - 13166.778: 97.1690% ( 24) 00:09:03.623 13166.778 - 13226.356: 97.3622% ( 23) 00:09:03.623 13226.356 - 13285.935: 97.4798% ( 14) 00:09:03.623 13285.935 - 13345.513: 97.5722% ( 11) 00:09:03.623 13345.513 - 13405.091: 97.6478% ( 9) 00:09:03.623 13405.091 - 13464.669: 97.7907% ( 17) 00:09:03.623 13464.669 - 13524.247: 97.9083% ( 14) 00:09:03.623 13524.247 - 13583.825: 97.9839% ( 9) 00:09:03.623 13583.825 - 13643.404: 98.0679% ( 10) 00:09:03.623 13643.404 - 13702.982: 98.1519% ( 10) 00:09:03.623 13702.982 - 13762.560: 98.2443% ( 11) 00:09:03.623 13762.560 - 13822.138: 98.3115% ( 8) 00:09:03.623 13822.138 - 13881.716: 98.3703% ( 7) 00:09:03.623 13881.716 - 13941.295: 98.4375% ( 8) 00:09:03.623 13941.295 - 14000.873: 98.4963% ( 7) 00:09:03.623 14000.873 - 14060.451: 98.5719% ( 9) 00:09:03.623 14060.451 - 14120.029: 98.6223% ( 6) 00:09:03.623 14120.029 - 14179.607: 98.6811% ( 7) 00:09:03.623 14179.607 - 14239.185: 98.7231% ( 5) 00:09:03.623 14239.185 - 14298.764: 98.7567% ( 4) 00:09:03.623 14298.764 - 14358.342: 98.7819% ( 3) 00:09:03.623 14358.342 - 14417.920: 98.8071% ( 3) 00:09:03.623 14417.920 - 14477.498: 98.8323% ( 3) 00:09:03.623 14477.498 - 14537.076: 98.8407% ( 1) 00:09:03.623 14537.076 - 14596.655: 98.8743% ( 4) 00:09:03.623 14596.655 - 14656.233: 98.8911% ( 2) 00:09:03.623 14656.233 - 14715.811: 98.9163% ( 3) 00:09:03.623 14715.811 - 14775.389: 98.9247% ( 1) 00:09:03.623 25499.462 - 25618.618: 98.9415% ( 2) 00:09:03.623 25618.618 - 25737.775: 98.9835% ( 5) 00:09:03.623 25737.775 - 25856.931: 99.0339% ( 6) 00:09:03.623 25856.931 - 25976.087: 99.0759% ( 5) 00:09:03.623 25976.087 - 26095.244: 99.1179% ( 5) 00:09:03.623 26095.244 - 26214.400: 99.1599% ( 5) 00:09:03.623 26214.400 - 26333.556: 99.2103% ( 6) 00:09:03.623 26333.556 - 26452.713: 99.2524% ( 5) 00:09:03.623 26452.713 - 26571.869: 99.2944% ( 5) 00:09:03.623 26571.869 - 26691.025: 99.3364% 
( 5) 00:09:03.623 26691.025 - 26810.182: 99.3868% ( 6) 00:09:03.623 26810.182 - 26929.338: 99.4372% ( 6) 00:09:03.623 26929.338 - 27048.495: 99.4624% ( 3) 00:09:03.623 32410.531 - 32648.844: 99.5128% ( 6) 00:09:03.623 32648.844 - 32887.156: 99.6052% ( 11) 00:09:03.623 32887.156 - 33125.469: 99.6808% ( 9) 00:09:03.623 33125.469 - 33363.782: 99.7648% ( 10) 00:09:03.623 33363.782 - 33602.095: 99.8740% ( 13) 00:09:03.623 33602.095 - 33840.407: 99.9580% ( 10) 00:09:03.623 33840.407 - 34078.720: 100.0000% ( 5) 00:09:03.623 00:09:03.623 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:03.623 ============================================================================== 00:09:03.623 Range in us Cumulative IO count 00:09:03.623 6583.389 - 6613.178: 0.0084% ( 1) 00:09:03.623 6613.178 - 6642.967: 0.1260% ( 14) 00:09:03.623 6642.967 - 6672.756: 0.1848% ( 7) 00:09:03.623 6672.756 - 6702.545: 0.2016% ( 2) 00:09:03.623 6702.545 - 6732.335: 0.2184% ( 2) 00:09:03.623 6732.335 - 6762.124: 0.2268% ( 1) 00:09:03.623 6762.124 - 6791.913: 0.2436% ( 2) 00:09:03.623 6791.913 - 6821.702: 0.2520% ( 1) 00:09:03.623 6821.702 - 6851.491: 0.2772% ( 3) 00:09:03.623 6851.491 - 6881.280: 0.2856% ( 1) 00:09:03.623 6881.280 - 6911.069: 0.3024% ( 2) 00:09:03.623 6911.069 - 6940.858: 0.3192% ( 2) 00:09:03.623 6940.858 - 6970.647: 0.3360% ( 2) 00:09:03.623 6970.647 - 7000.436: 0.3528% ( 2) 00:09:03.623 7000.436 - 7030.225: 0.3612% ( 1) 00:09:03.623 7030.225 - 7060.015: 0.3780% ( 2) 00:09:03.623 7060.015 - 7089.804: 0.3948% ( 2) 00:09:03.623 7089.804 - 7119.593: 0.4200% ( 3) 00:09:03.623 7119.593 - 7149.382: 0.4368% ( 2) 00:09:03.623 7149.382 - 7179.171: 0.4536% ( 2) 00:09:03.623 7179.171 - 7208.960: 0.4704% ( 2) 00:09:03.623 7208.960 - 7238.749: 0.4956% ( 3) 00:09:03.623 7238.749 - 7268.538: 0.5124% ( 2) 00:09:03.623 7268.538 - 7298.327: 0.5292% ( 2) 00:09:03.623 7298.327 - 7328.116: 0.5376% ( 1) 00:09:03.623 9175.040 - 9234.618: 0.6216% ( 10) 00:09:03.623 9234.618 - 9294.196: 0.7644% ( 17) 00:09:03.623 9294.196 - 9353.775: 1.1005% ( 40) 00:09:03.623 9353.775 - 9413.353: 1.6969% ( 71) 00:09:03.623 9413.353 - 9472.931: 2.5370% ( 100) 00:09:03.623 9472.931 - 9532.509: 3.5618% ( 122) 00:09:03.623 9532.509 - 9592.087: 4.7379% ( 140) 00:09:03.623 9592.087 - 9651.665: 6.3172% ( 188) 00:09:03.623 9651.665 - 9711.244: 8.3921% ( 247) 00:09:03.623 9711.244 - 9770.822: 10.4755% ( 248) 00:09:03.623 9770.822 - 9830.400: 12.9032% ( 289) 00:09:03.623 9830.400 - 9889.978: 15.8602% ( 352) 00:09:03.623 9889.978 - 9949.556: 18.9852% ( 372) 00:09:03.623 9949.556 - 10009.135: 22.7739% ( 451) 00:09:03.623 10009.135 - 10068.713: 26.6801% ( 465) 00:09:03.623 10068.713 - 10128.291: 31.2920% ( 549) 00:09:03.623 10128.291 - 10187.869: 35.8199% ( 539) 00:09:03.623 10187.869 - 10247.447: 40.0202% ( 500) 00:09:03.623 10247.447 - 10307.025: 44.5060% ( 534) 00:09:03.624 10307.025 - 10366.604: 48.6559% ( 494) 00:09:03.624 10366.604 - 10426.182: 52.6462% ( 475) 00:09:03.624 10426.182 - 10485.760: 56.4432% ( 452) 00:09:03.624 10485.760 - 10545.338: 60.1815% ( 445) 00:09:03.624 10545.338 - 10604.916: 63.5585% ( 402) 00:09:03.624 10604.916 - 10664.495: 66.7171% ( 376) 00:09:03.624 10664.495 - 10724.073: 69.6657% ( 351) 00:09:03.624 10724.073 - 10783.651: 72.3034% ( 314) 00:09:03.624 10783.651 - 10843.229: 74.7732% ( 294) 00:09:03.624 10843.229 - 10902.807: 77.1253% ( 280) 00:09:03.624 10902.807 - 10962.385: 79.0155% ( 225) 00:09:03.624 10962.385 - 11021.964: 80.5528% ( 183) 00:09:03.624 11021.964 - 11081.542: 82.0144% ( 174) 00:09:03.624 
11081.542 - 11141.120: 83.1989% ( 141) 00:09:03.624 11141.120 - 11200.698: 84.2322% ( 123) 00:09:03.624 11200.698 - 11260.276: 85.1647% ( 111) 00:09:03.624 11260.276 - 11319.855: 85.9375% ( 92) 00:09:03.624 11319.855 - 11379.433: 86.6095% ( 80) 00:09:03.624 11379.433 - 11439.011: 87.2312% ( 74) 00:09:03.624 11439.011 - 11498.589: 87.7604% ( 63) 00:09:03.624 11498.589 - 11558.167: 88.1804% ( 50) 00:09:03.624 11558.167 - 11617.745: 88.6929% ( 61) 00:09:03.624 11617.745 - 11677.324: 89.1465% ( 54) 00:09:03.624 11677.324 - 11736.902: 89.6001% ( 54) 00:09:03.624 11736.902 - 11796.480: 90.0118% ( 49) 00:09:03.624 11796.480 - 11856.058: 90.3730% ( 43) 00:09:03.624 11856.058 - 11915.636: 90.7930% ( 50) 00:09:03.624 11915.636 - 11975.215: 91.1542% ( 43) 00:09:03.624 11975.215 - 12034.793: 91.5407% ( 46) 00:09:03.624 12034.793 - 12094.371: 91.9607% ( 50) 00:09:03.624 12094.371 - 12153.949: 92.3723% ( 49) 00:09:03.624 12153.949 - 12213.527: 92.8175% ( 53) 00:09:03.624 12213.527 - 12273.105: 93.2880% ( 56) 00:09:03.624 12273.105 - 12332.684: 93.7332% ( 53) 00:09:03.624 12332.684 - 12392.262: 94.1532% ( 50) 00:09:03.624 12392.262 - 12451.840: 94.4724% ( 38) 00:09:03.624 12451.840 - 12511.418: 94.7581% ( 34) 00:09:03.624 12511.418 - 12570.996: 95.0017% ( 29) 00:09:03.624 12570.996 - 12630.575: 95.2369% ( 28) 00:09:03.624 12630.575 - 12690.153: 95.4217% ( 22) 00:09:03.624 12690.153 - 12749.731: 95.6233% ( 24) 00:09:03.624 12749.731 - 12809.309: 95.8417% ( 26) 00:09:03.624 12809.309 - 12868.887: 96.0938% ( 30) 00:09:03.624 12868.887 - 12928.465: 96.3374% ( 29) 00:09:03.624 12928.465 - 12988.044: 96.6398% ( 36) 00:09:03.624 12988.044 - 13047.622: 96.8414% ( 24) 00:09:03.624 13047.622 - 13107.200: 96.9758% ( 16) 00:09:03.624 13107.200 - 13166.778: 97.1354% ( 19) 00:09:03.624 13166.778 - 13226.356: 97.2698% ( 16) 00:09:03.624 13226.356 - 13285.935: 97.3622% ( 11) 00:09:03.624 13285.935 - 13345.513: 97.4546% ( 11) 00:09:03.624 13345.513 - 13405.091: 97.5554% ( 12) 00:09:03.624 13405.091 - 13464.669: 97.6562% ( 12) 00:09:03.624 13464.669 - 13524.247: 97.8075% ( 18) 00:09:03.624 13524.247 - 13583.825: 97.9419% ( 16) 00:09:03.624 13583.825 - 13643.404: 98.0091% ( 8) 00:09:03.624 13643.404 - 13702.982: 98.1015% ( 11) 00:09:03.624 13702.982 - 13762.560: 98.1855% ( 10) 00:09:03.624 13762.560 - 13822.138: 98.2779% ( 11) 00:09:03.624 13822.138 - 13881.716: 98.3871% ( 13) 00:09:03.624 13881.716 - 13941.295: 98.4543% ( 8) 00:09:03.624 13941.295 - 14000.873: 98.5383% ( 10) 00:09:03.624 14000.873 - 14060.451: 98.5971% ( 7) 00:09:03.624 14060.451 - 14120.029: 98.6559% ( 7) 00:09:03.624 14120.029 - 14179.607: 98.7063% ( 6) 00:09:03.624 14179.607 - 14239.185: 98.7567% ( 6) 00:09:03.624 14239.185 - 14298.764: 98.7987% ( 5) 00:09:03.624 14298.764 - 14358.342: 98.8239% ( 3) 00:09:03.624 14358.342 - 14417.920: 98.8491% ( 3) 00:09:03.624 14417.920 - 14477.498: 98.8743% ( 3) 00:09:03.624 14477.498 - 14537.076: 98.8995% ( 3) 00:09:03.624 14537.076 - 14596.655: 98.9247% ( 3) 00:09:03.624 25141.993 - 25261.149: 98.9583% ( 4) 00:09:03.624 25261.149 - 25380.305: 99.0087% ( 6) 00:09:03.624 25380.305 - 25499.462: 99.0591% ( 6) 00:09:03.624 25499.462 - 25618.618: 99.1095% ( 6) 00:09:03.624 25618.618 - 25737.775: 99.1515% ( 5) 00:09:03.624 25737.775 - 25856.931: 99.1935% ( 5) 00:09:03.624 25856.931 - 25976.087: 99.2440% ( 6) 00:09:03.624 25976.087 - 26095.244: 99.2860% ( 5) 00:09:03.624 26095.244 - 26214.400: 99.3448% ( 7) 00:09:03.624 26214.400 - 26333.556: 99.3952% ( 6) 00:09:03.624 26333.556 - 26452.713: 99.4540% ( 7) 00:09:03.624 
26452.713 - 26571.869: 99.4624% ( 1) 00:09:03.624 31695.593 - 31933.905: 99.5632% ( 12) 00:09:03.624 31933.905 - 32172.218: 99.6640% ( 12) 00:09:03.624 32172.218 - 32410.531: 99.7480% ( 10) 00:09:03.624 32410.531 - 32648.844: 99.8320% ( 10) 00:09:03.624 32648.844 - 32887.156: 99.9412% ( 13) 00:09:03.624 32887.156 - 33125.469: 100.0000% ( 7) 00:09:03.624 00:09:03.624 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:03.624 ============================================================================== 00:09:03.624 Range in us Cumulative IO count 00:09:03.624 5451.404 - 5481.193: 0.0252% ( 3) 00:09:03.624 5481.193 - 5510.982: 0.1344% ( 13) 00:09:03.624 5510.982 - 5540.771: 0.1680% ( 4) 00:09:03.624 5540.771 - 5570.560: 0.1764% ( 1) 00:09:03.624 5570.560 - 5600.349: 0.1932% ( 2) 00:09:03.624 5600.349 - 5630.138: 0.2100% ( 2) 00:09:03.624 5630.138 - 5659.927: 0.2268% ( 2) 00:09:03.624 5659.927 - 5689.716: 0.2436% ( 2) 00:09:03.624 5689.716 - 5719.505: 0.2604% ( 2) 00:09:03.624 5719.505 - 5749.295: 0.2688% ( 1) 00:09:03.624 5749.295 - 5779.084: 0.2856% ( 2) 00:09:03.624 5779.084 - 5808.873: 0.3024% ( 2) 00:09:03.624 5808.873 - 5838.662: 0.3108% ( 1) 00:09:03.624 5838.662 - 5868.451: 0.3276% ( 2) 00:09:03.624 5868.451 - 5898.240: 0.3444% ( 2) 00:09:03.624 5898.240 - 5928.029: 0.3612% ( 2) 00:09:03.624 5928.029 - 5957.818: 0.3864% ( 3) 00:09:03.624 5957.818 - 5987.607: 0.3948% ( 1) 00:09:03.624 5987.607 - 6017.396: 0.4116% ( 2) 00:09:03.624 6017.396 - 6047.185: 0.4284% ( 2) 00:09:03.624 6047.185 - 6076.975: 0.4452% ( 2) 00:09:03.624 6076.975 - 6106.764: 0.4620% ( 2) 00:09:03.624 6106.764 - 6136.553: 0.4788% ( 2) 00:09:03.624 6136.553 - 6166.342: 0.4956% ( 2) 00:09:03.624 6166.342 - 6196.131: 0.5124% ( 2) 00:09:03.624 6196.131 - 6225.920: 0.5292% ( 2) 00:09:03.624 6225.920 - 6255.709: 0.5376% ( 1) 00:09:03.624 8579.258 - 8638.836: 0.6300% ( 11) 00:09:03.624 8638.836 - 8698.415: 0.7560% ( 15) 00:09:03.624 8698.415 - 8757.993: 0.7812% ( 3) 00:09:03.624 8757.993 - 8817.571: 0.8149% ( 4) 00:09:03.624 8817.571 - 8877.149: 0.8485% ( 4) 00:09:03.624 8877.149 - 8936.727: 0.8821% ( 4) 00:09:03.624 8936.727 - 8996.305: 0.9157% ( 4) 00:09:03.624 8996.305 - 9055.884: 0.9409% ( 3) 00:09:03.624 9055.884 - 9115.462: 0.9745% ( 4) 00:09:03.624 9115.462 - 9175.040: 0.9997% ( 3) 00:09:03.624 9175.040 - 9234.618: 1.0585% ( 7) 00:09:03.624 9234.618 - 9294.196: 1.3021% ( 29) 00:09:03.624 9294.196 - 9353.775: 1.5541% ( 30) 00:09:03.624 9353.775 - 9413.353: 1.9489% ( 47) 00:09:03.624 9413.353 - 9472.931: 2.6714% ( 86) 00:09:03.624 9472.931 - 9532.509: 3.5702% ( 107) 00:09:03.624 9532.509 - 9592.087: 4.7799% ( 144) 00:09:03.624 9592.087 - 9651.665: 6.4600% ( 200) 00:09:03.624 9651.665 - 9711.244: 8.5769% ( 252) 00:09:03.624 9711.244 - 9770.822: 11.1223% ( 303) 00:09:03.624 9770.822 - 9830.400: 14.0037% ( 343) 00:09:03.624 9830.400 - 9889.978: 17.3135% ( 394) 00:09:03.624 9889.978 - 9949.556: 20.5057% ( 380) 00:09:03.624 9949.556 - 10009.135: 23.8659% ( 400) 00:09:03.624 10009.135 - 10068.713: 27.4950% ( 432) 00:09:03.624 10068.713 - 10128.291: 31.3004% ( 453) 00:09:03.624 10128.291 - 10187.869: 35.4503% ( 494) 00:09:03.624 10187.869 - 10247.447: 39.7681% ( 514) 00:09:03.624 10247.447 - 10307.025: 43.8928% ( 491) 00:09:03.624 10307.025 - 10366.604: 47.8075% ( 466) 00:09:03.624 10366.604 - 10426.182: 51.7137% ( 465) 00:09:03.624 10426.182 - 10485.760: 55.7376% ( 479) 00:09:03.624 10485.760 - 10545.338: 59.5850% ( 458) 00:09:03.624 10545.338 - 10604.916: 63.2644% ( 438) 00:09:03.624 10604.916 - 10664.495: 
66.4987% ( 385) 00:09:03.624 10664.495 - 10724.073: 69.5397% ( 362) 00:09:03.624 10724.073 - 10783.651: 72.3370% ( 333) 00:09:03.624 10783.651 - 10843.229: 74.8320% ( 297) 00:09:03.624 10843.229 - 10902.807: 77.0413% ( 263) 00:09:03.624 10902.807 - 10962.385: 78.8558% ( 216) 00:09:03.624 10962.385 - 11021.964: 80.3175% ( 174) 00:09:03.624 11021.964 - 11081.542: 81.6196% ( 155) 00:09:03.624 11081.542 - 11141.120: 82.7957% ( 140) 00:09:03.624 11141.120 - 11200.698: 83.8626% ( 127) 00:09:03.624 11200.698 - 11260.276: 84.9126% ( 125) 00:09:03.624 11260.276 - 11319.855: 85.8115% ( 107) 00:09:03.624 11319.855 - 11379.433: 86.5675% ( 90) 00:09:03.624 11379.433 - 11439.011: 87.2984% ( 87) 00:09:03.624 11439.011 - 11498.589: 87.9368% ( 76) 00:09:03.624 11498.589 - 11558.167: 88.4997% ( 67) 00:09:03.624 11558.167 - 11617.745: 88.9029% ( 48) 00:09:03.624 11617.745 - 11677.324: 89.3061% ( 48) 00:09:03.624 11677.324 - 11736.902: 89.6925% ( 46) 00:09:03.624 11736.902 - 11796.480: 90.0622% ( 44) 00:09:03.624 11796.480 - 11856.058: 90.4654% ( 48) 00:09:03.624 11856.058 - 11915.636: 90.9022% ( 52) 00:09:03.624 11915.636 - 11975.215: 91.3054% ( 48) 00:09:03.624 11975.215 - 12034.793: 91.7171% ( 49) 00:09:03.624 12034.793 - 12094.371: 92.1875% ( 56) 00:09:03.624 12094.371 - 12153.949: 92.6579% ( 56) 00:09:03.624 12153.949 - 12213.527: 93.1452% ( 58) 00:09:03.624 12213.527 - 12273.105: 93.6072% ( 55) 00:09:03.624 12273.105 - 12332.684: 93.9348% ( 39) 00:09:03.624 12332.684 - 12392.262: 94.2288% ( 35) 00:09:03.624 12392.262 - 12451.840: 94.4976% ( 32) 00:09:03.624 12451.840 - 12511.418: 94.7665% ( 32) 00:09:03.624 12511.418 - 12570.996: 95.0269% ( 31) 00:09:03.624 12570.996 - 12630.575: 95.3041% ( 33) 00:09:03.624 12630.575 - 12690.153: 95.5309% ( 27) 00:09:03.624 12690.153 - 12749.731: 95.7829% ( 30) 00:09:03.624 12749.731 - 12809.309: 96.0181% ( 28) 00:09:03.624 12809.309 - 12868.887: 96.2114% ( 23) 00:09:03.624 12868.887 - 12928.465: 96.4382% ( 27) 00:09:03.624 12928.465 - 12988.044: 96.6734% ( 28) 00:09:03.624 12988.044 - 13047.622: 96.9254% ( 30) 00:09:03.624 13047.622 - 13107.200: 97.1438% ( 26) 00:09:03.624 13107.200 - 13166.778: 97.3622% ( 26) 00:09:03.625 13166.778 - 13226.356: 97.5050% ( 17) 00:09:03.625 13226.356 - 13285.935: 97.6058% ( 12) 00:09:03.625 13285.935 - 13345.513: 97.7067% ( 12) 00:09:03.625 13345.513 - 13405.091: 97.7907% ( 10) 00:09:03.625 13405.091 - 13464.669: 97.8999% ( 13) 00:09:03.625 13464.669 - 13524.247: 98.0091% ( 13) 00:09:03.625 13524.247 - 13583.825: 98.1183% ( 13) 00:09:03.625 13583.825 - 13643.404: 98.2107% ( 11) 00:09:03.625 13643.404 - 13702.982: 98.3031% ( 11) 00:09:03.625 13702.982 - 13762.560: 98.3619% ( 7) 00:09:03.625 13762.560 - 13822.138: 98.4711% ( 13) 00:09:03.625 13822.138 - 13881.716: 98.5383% ( 8) 00:09:03.625 13881.716 - 13941.295: 98.5971% ( 7) 00:09:03.625 13941.295 - 14000.873: 98.6811% ( 10) 00:09:03.625 14000.873 - 14060.451: 98.7315% ( 6) 00:09:03.625 14060.451 - 14120.029: 98.7651% ( 4) 00:09:03.625 14120.029 - 14179.607: 98.7903% ( 3) 00:09:03.625 14179.607 - 14239.185: 98.8155% ( 3) 00:09:03.625 14239.185 - 14298.764: 98.8491% ( 4) 00:09:03.885 14298.764 - 14358.342: 98.8827% ( 4) 00:09:03.885 14358.342 - 14417.920: 98.9079% ( 3) 00:09:03.885 14417.920 - 14477.498: 98.9247% ( 2) 00:09:03.885 25261.149 - 25380.305: 98.9751% ( 6) 00:09:03.885 25380.305 - 25499.462: 99.0339% ( 7) 00:09:03.885 25499.462 - 25618.618: 99.0927% ( 7) 00:09:03.885 25618.618 - 25737.775: 99.1347% ( 5) 00:09:03.885 25737.775 - 25856.931: 99.1767% ( 5) 00:09:03.885 25856.931 - 
25976.087: 99.2272% ( 6) 00:09:03.885 25976.087 - 26095.244: 99.2776% ( 6) 00:09:03.885 26095.244 - 26214.400: 99.3364% ( 7) 00:09:03.885 26214.400 - 26333.556: 99.3868% ( 6) 00:09:03.885 26333.556 - 26452.713: 99.4288% ( 5) 00:09:03.885 26452.713 - 26571.869: 99.4624% ( 4) 00:09:03.885 31457.280 - 31695.593: 99.4960% ( 4) 00:09:03.885 31695.593 - 31933.905: 99.5884% ( 11) 00:09:03.885 31933.905 - 32172.218: 99.6724% ( 10) 00:09:03.885 32172.218 - 32410.531: 99.7732% ( 12) 00:09:03.885 32410.531 - 32648.844: 99.8656% ( 11) 00:09:03.885 32648.844 - 32887.156: 99.9496% ( 10) 00:09:03.885 32887.156 - 33125.469: 100.0000% ( 6) 00:09:03.885 00:09:03.885 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:03.885 ============================================================================== 00:09:03.885 Range in us Cumulative IO count 00:09:03.885 4974.778 - 5004.567: 0.0084% ( 1) 00:09:03.885 5064.145 - 5093.935: 0.0588% ( 6) 00:09:03.885 5093.935 - 5123.724: 0.1848% ( 15) 00:09:03.885 5123.724 - 5153.513: 0.2016% ( 2) 00:09:03.885 5153.513 - 5183.302: 0.2184% ( 2) 00:09:03.885 5183.302 - 5213.091: 0.2352% ( 2) 00:09:03.885 5213.091 - 5242.880: 0.2520% ( 2) 00:09:03.885 5242.880 - 5272.669: 0.2688% ( 2) 00:09:03.885 5272.669 - 5302.458: 0.2772% ( 1) 00:09:03.885 5302.458 - 5332.247: 0.2940% ( 2) 00:09:03.885 5332.247 - 5362.036: 0.3108% ( 2) 00:09:03.885 5362.036 - 5391.825: 0.3276% ( 2) 00:09:03.885 5391.825 - 5421.615: 0.3444% ( 2) 00:09:03.885 5421.615 - 5451.404: 0.3612% ( 2) 00:09:03.885 5451.404 - 5481.193: 0.3696% ( 1) 00:09:03.885 5481.193 - 5510.982: 0.3864% ( 2) 00:09:03.885 5510.982 - 5540.771: 0.4032% ( 2) 00:09:03.885 5540.771 - 5570.560: 0.4200% ( 2) 00:09:03.885 5570.560 - 5600.349: 0.4452% ( 3) 00:09:03.885 5600.349 - 5630.138: 0.4536% ( 1) 00:09:03.885 5630.138 - 5659.927: 0.4704% ( 2) 00:09:03.885 5659.927 - 5689.716: 0.4872% ( 2) 00:09:03.885 5689.716 - 5719.505: 0.5040% ( 2) 00:09:03.885 5719.505 - 5749.295: 0.5292% ( 3) 00:09:03.886 5749.295 - 5779.084: 0.5376% ( 1) 00:09:03.886 8102.633 - 8162.211: 0.5712% ( 4) 00:09:03.886 8162.211 - 8221.789: 0.6972% ( 15) 00:09:03.886 8221.789 - 8281.367: 0.7812% ( 10) 00:09:03.886 8281.367 - 8340.945: 0.8149% ( 4) 00:09:03.886 8340.945 - 8400.524: 0.8401% ( 3) 00:09:03.886 8400.524 - 8460.102: 0.8653% ( 3) 00:09:03.886 8460.102 - 8519.680: 0.9073% ( 5) 00:09:03.886 8519.680 - 8579.258: 0.9325% ( 3) 00:09:03.886 8579.258 - 8638.836: 0.9661% ( 4) 00:09:03.886 8638.836 - 8698.415: 0.9913% ( 3) 00:09:03.886 8698.415 - 8757.993: 1.0165% ( 3) 00:09:03.886 8757.993 - 8817.571: 1.0501% ( 4) 00:09:03.886 8817.571 - 8877.149: 1.0753% ( 3) 00:09:03.886 9175.040 - 9234.618: 1.0921% ( 2) 00:09:03.886 9234.618 - 9294.196: 1.1845% ( 11) 00:09:03.886 9294.196 - 9353.775: 1.5289% ( 41) 00:09:03.886 9353.775 - 9413.353: 2.1001% ( 68) 00:09:03.886 9413.353 - 9472.931: 2.9318% ( 99) 00:09:03.886 9472.931 - 9532.509: 3.8894% ( 114) 00:09:03.886 9532.509 - 9592.087: 5.2587% ( 163) 00:09:03.886 9592.087 - 9651.665: 6.7876% ( 182) 00:09:03.886 9651.665 - 9711.244: 8.8206% ( 242) 00:09:03.886 9711.244 - 9770.822: 11.2399% ( 288) 00:09:03.886 9770.822 - 9830.400: 13.7601% ( 300) 00:09:03.886 9830.400 - 9889.978: 16.7927% ( 361) 00:09:03.886 9889.978 - 9949.556: 20.2201% ( 408) 00:09:03.886 9949.556 - 10009.135: 23.9415% ( 443) 00:09:03.886 10009.135 - 10068.713: 27.6210% ( 438) 00:09:03.886 10068.713 - 10128.291: 31.5692% ( 470) 00:09:03.886 10128.291 - 10187.869: 35.4503% ( 462) 00:09:03.886 10187.869 - 10247.447: 39.5413% ( 487) 00:09:03.886 
10247.447 - 10307.025: 43.7836% ( 505) 00:09:03.886 10307.025 - 10366.604: 47.9839% ( 500) 00:09:03.886 10366.604 - 10426.182: 51.9741% ( 475) 00:09:03.886 10426.182 - 10485.760: 55.9644% ( 475) 00:09:03.886 10485.760 - 10545.338: 59.6774% ( 442) 00:09:03.886 10545.338 - 10604.916: 63.1300% ( 411) 00:09:03.886 10604.916 - 10664.495: 66.4987% ( 401) 00:09:03.886 10664.495 - 10724.073: 69.5397% ( 362) 00:09:03.886 10724.073 - 10783.651: 72.2866% ( 327) 00:09:03.886 10783.651 - 10843.229: 74.8404% ( 304) 00:09:03.886 10843.229 - 10902.807: 77.1337% ( 273) 00:09:03.886 10902.807 - 10962.385: 79.0491% ( 228) 00:09:03.886 10962.385 - 11021.964: 80.7124% ( 198) 00:09:03.886 11021.964 - 11081.542: 82.1657% ( 173) 00:09:03.886 11081.542 - 11141.120: 83.3669% ( 143) 00:09:03.886 11141.120 - 11200.698: 84.2910% ( 110) 00:09:03.886 11200.698 - 11260.276: 85.2487% ( 114) 00:09:03.886 11260.276 - 11319.855: 85.9795% ( 87) 00:09:03.886 11319.855 - 11379.433: 86.5591% ( 69) 00:09:03.886 11379.433 - 11439.011: 87.0968% ( 64) 00:09:03.886 11439.011 - 11498.589: 87.7352% ( 76) 00:09:03.886 11498.589 - 11558.167: 88.2224% ( 58) 00:09:03.886 11558.167 - 11617.745: 88.7769% ( 66) 00:09:03.886 11617.745 - 11677.324: 89.2641% ( 58) 00:09:03.886 11677.324 - 11736.902: 89.6925% ( 51) 00:09:03.886 11736.902 - 11796.480: 90.0370% ( 41) 00:09:03.886 11796.480 - 11856.058: 90.3646% ( 39) 00:09:03.886 11856.058 - 11915.636: 90.7006% ( 40) 00:09:03.886 11915.636 - 11975.215: 91.0198% ( 38) 00:09:03.886 11975.215 - 12034.793: 91.3138% ( 35) 00:09:03.886 12034.793 - 12094.371: 91.6583% ( 41) 00:09:03.886 12094.371 - 12153.949: 92.0867% ( 51) 00:09:03.886 12153.949 - 12213.527: 92.6495% ( 67) 00:09:03.886 12213.527 - 12273.105: 93.0864% ( 52) 00:09:03.886 12273.105 - 12332.684: 93.4896% ( 48) 00:09:03.886 12332.684 - 12392.262: 93.8424% ( 42) 00:09:03.886 12392.262 - 12451.840: 94.2204% ( 45) 00:09:03.886 12451.840 - 12511.418: 94.4976% ( 33) 00:09:03.886 12511.418 - 12570.996: 94.7749% ( 33) 00:09:03.886 12570.996 - 12630.575: 95.0521% ( 33) 00:09:03.886 12630.575 - 12690.153: 95.3545% ( 36) 00:09:03.886 12690.153 - 12749.731: 95.6905% ( 40) 00:09:03.886 12749.731 - 12809.309: 95.9341% ( 29) 00:09:03.886 12809.309 - 12868.887: 96.1694% ( 28) 00:09:03.886 12868.887 - 12928.465: 96.3794% ( 25) 00:09:03.886 12928.465 - 12988.044: 96.6230% ( 29) 00:09:03.886 12988.044 - 13047.622: 96.9002% ( 33) 00:09:03.886 13047.622 - 13107.200: 97.1522% ( 30) 00:09:03.886 13107.200 - 13166.778: 97.3538% ( 24) 00:09:03.886 13166.778 - 13226.356: 97.5638% ( 25) 00:09:03.886 13226.356 - 13285.935: 97.7487% ( 22) 00:09:03.886 13285.935 - 13345.513: 97.8915% ( 17) 00:09:03.886 13345.513 - 13405.091: 98.0343% ( 17) 00:09:03.886 13405.091 - 13464.669: 98.1519% ( 14) 00:09:03.886 13464.669 - 13524.247: 98.2359% ( 10) 00:09:03.886 13524.247 - 13583.825: 98.3199% ( 10) 00:09:03.886 13583.825 - 13643.404: 98.3955% ( 9) 00:09:03.886 13643.404 - 13702.982: 98.4627% ( 8) 00:09:03.886 13702.982 - 13762.560: 98.5215% ( 7) 00:09:03.886 13762.560 - 13822.138: 98.5887% ( 8) 00:09:03.886 13822.138 - 13881.716: 98.6475% ( 7) 00:09:03.886 13881.716 - 13941.295: 98.7231% ( 9) 00:09:03.886 13941.295 - 14000.873: 98.7819% ( 7) 00:09:03.886 14000.873 - 14060.451: 98.8323% ( 6) 00:09:03.886 14060.451 - 14120.029: 98.8827% ( 6) 00:09:03.886 14120.029 - 14179.607: 98.9079% ( 3) 00:09:03.886 14179.607 - 14239.185: 98.9247% ( 2) 00:09:03.886 24307.898 - 24427.055: 98.9583% ( 4) 00:09:03.886 24427.055 - 24546.211: 98.9919% ( 4) 00:09:03.886 24546.211 - 24665.367: 99.0423% 
( 6) 00:09:03.886 24665.367 - 24784.524: 99.0843% ( 5) 00:09:03.886 24784.524 - 24903.680: 99.1179% ( 4) 00:09:03.886 24903.680 - 25022.836: 99.1599% ( 5) 00:09:03.886 25022.836 - 25141.993: 99.2019% ( 5) 00:09:03.886 25141.993 - 25261.149: 99.2440% ( 5) 00:09:03.886 25261.149 - 25380.305: 99.2944% ( 6) 00:09:03.886 25380.305 - 25499.462: 99.3448% ( 6) 00:09:03.886 25499.462 - 25618.618: 99.4036% ( 7) 00:09:03.886 25618.618 - 25737.775: 99.4456% ( 5) 00:09:03.886 25737.775 - 25856.931: 99.4624% ( 2) 00:09:03.886 30742.342 - 30980.655: 99.4792% ( 2) 00:09:03.886 30980.655 - 31218.967: 99.5884% ( 13) 00:09:03.886 31218.967 - 31457.280: 99.6976% ( 13) 00:09:03.886 31457.280 - 31695.593: 99.7900% ( 11) 00:09:03.886 31695.593 - 31933.905: 99.8824% ( 11) 00:09:03.886 31933.905 - 32172.218: 99.9748% ( 11) 00:09:03.886 32172.218 - 32410.531: 100.0000% ( 3) 00:09:03.886 00:09:03.886 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:03.886 ============================================================================== 00:09:03.886 Range in us Cumulative IO count 00:09:03.886 4527.942 - 4557.731: 0.0167% ( 2) 00:09:03.886 4557.731 - 4587.520: 0.0668% ( 6) 00:09:03.886 4587.520 - 4617.309: 0.1504% ( 10) 00:09:03.886 4617.309 - 4647.098: 0.1755% ( 3) 00:09:03.886 4647.098 - 4676.887: 0.1922% ( 2) 00:09:03.886 4676.887 - 4706.676: 0.2005% ( 1) 00:09:03.886 4706.676 - 4736.465: 0.2172% ( 2) 00:09:03.886 4736.465 - 4766.255: 0.2340% ( 2) 00:09:03.886 4766.255 - 4796.044: 0.2507% ( 2) 00:09:03.886 4796.044 - 4825.833: 0.2674% ( 2) 00:09:03.886 4825.833 - 4855.622: 0.2841% ( 2) 00:09:03.886 4855.622 - 4885.411: 0.2924% ( 1) 00:09:03.886 4885.411 - 4915.200: 0.3092% ( 2) 00:09:03.886 4915.200 - 4944.989: 0.3175% ( 1) 00:09:03.886 4944.989 - 4974.778: 0.3342% ( 2) 00:09:03.886 4974.778 - 5004.567: 0.3509% ( 2) 00:09:03.886 5004.567 - 5034.356: 0.3760% ( 3) 00:09:03.886 5034.356 - 5064.145: 0.3927% ( 2) 00:09:03.886 5064.145 - 5093.935: 0.4094% ( 2) 00:09:03.886 5093.935 - 5123.724: 0.4345% ( 3) 00:09:03.886 5123.724 - 5153.513: 0.4512% ( 2) 00:09:03.886 5153.513 - 5183.302: 0.4596% ( 1) 00:09:03.886 5183.302 - 5213.091: 0.4763% ( 2) 00:09:03.886 5213.091 - 5242.880: 0.4930% ( 2) 00:09:03.886 5242.880 - 5272.669: 0.5097% ( 2) 00:09:03.886 5272.669 - 5302.458: 0.5264% ( 2) 00:09:03.886 5302.458 - 5332.247: 0.5348% ( 1) 00:09:03.886 7685.585 - 7745.164: 0.5849% ( 6) 00:09:03.886 7745.164 - 7804.742: 0.7102% ( 15) 00:09:03.886 7804.742 - 7864.320: 0.7353% ( 3) 00:09:03.886 7864.320 - 7923.898: 0.7687% ( 4) 00:09:03.886 7923.898 - 7983.476: 0.7938% ( 3) 00:09:03.886 7983.476 - 8043.055: 0.8272% ( 4) 00:09:03.886 8043.055 - 8102.633: 0.8606% ( 4) 00:09:03.886 8102.633 - 8162.211: 0.8857% ( 3) 00:09:03.886 8162.211 - 8221.789: 0.9275% ( 5) 00:09:03.886 8221.789 - 8281.367: 0.9525% ( 3) 00:09:03.886 8281.367 - 8340.945: 0.9860% ( 4) 00:09:03.886 8340.945 - 8400.524: 1.0194% ( 4) 00:09:03.886 8400.524 - 8460.102: 1.0445% ( 3) 00:09:03.886 8460.102 - 8519.680: 1.0695% ( 3) 00:09:03.886 9115.462 - 9175.040: 1.0862% ( 2) 00:09:03.886 9175.040 - 9234.618: 1.1364% ( 6) 00:09:03.886 9234.618 - 9294.196: 1.3870% ( 30) 00:09:03.886 9294.196 - 9353.775: 1.9301% ( 65) 00:09:03.886 9353.775 - 9413.353: 2.4566% ( 63) 00:09:03.886 9413.353 - 9472.931: 3.0665% ( 73) 00:09:03.886 9472.931 - 9532.509: 3.8937% ( 99) 00:09:03.886 9532.509 - 9592.087: 4.9799% ( 130) 00:09:03.886 9592.087 - 9651.665: 6.3670% ( 166) 00:09:03.886 9651.665 - 9711.244: 8.6397% ( 272) 00:09:03.886 9711.244 - 9770.822: 11.2884% ( 317) 
00:09:03.886 9770.822 - 9830.400: 14.2129% ( 350) 00:09:03.886 9830.400 - 9889.978: 17.2627% ( 365) 00:09:03.886 9889.978 - 9949.556: 20.4128% ( 377) 00:09:03.886 9949.556 - 10009.135: 23.6798% ( 391) 00:09:03.886 10009.135 - 10068.713: 27.4148% ( 447) 00:09:03.886 10068.713 - 10128.291: 31.1414% ( 446) 00:09:03.886 10128.291 - 10187.869: 35.0267% ( 465) 00:09:03.886 10187.869 - 10247.447: 39.5137% ( 537) 00:09:03.886 10247.447 - 10307.025: 43.6664% ( 497) 00:09:03.886 10307.025 - 10366.604: 48.0448% ( 524) 00:09:03.886 10366.604 - 10426.182: 52.2393% ( 502) 00:09:03.886 10426.182 - 10485.760: 56.1664% ( 470) 00:09:03.886 10485.760 - 10545.338: 59.8680% ( 443) 00:09:03.886 10545.338 - 10604.916: 63.4776% ( 432) 00:09:03.886 10604.916 - 10664.495: 66.5441% ( 367) 00:09:03.886 10664.495 - 10724.073: 69.4184% ( 344) 00:09:03.886 10724.073 - 10783.651: 71.9669% ( 305) 00:09:03.886 10783.651 - 10843.229: 74.2229% ( 270) 00:09:03.886 10843.229 - 10902.807: 76.2450% ( 242) 00:09:03.886 10902.807 - 10962.385: 78.1918% ( 233) 00:09:03.886 10962.385 - 11021.964: 80.0468% ( 222) 00:09:03.886 11021.964 - 11081.542: 81.5592% ( 181) 00:09:03.887 11081.542 - 11141.120: 83.0715% ( 181) 00:09:03.887 11141.120 - 11200.698: 84.1995% ( 135) 00:09:03.887 11200.698 - 11260.276: 85.1771% ( 117) 00:09:03.887 11260.276 - 11319.855: 85.9459% ( 92) 00:09:03.887 11319.855 - 11379.433: 86.5809% ( 76) 00:09:03.887 11379.433 - 11439.011: 87.2243% ( 77) 00:09:03.887 11439.011 - 11498.589: 87.8008% ( 69) 00:09:03.887 11498.589 - 11558.167: 88.3857% ( 70) 00:09:03.887 11558.167 - 11617.745: 88.8954% ( 61) 00:09:03.887 11617.745 - 11677.324: 89.3800% ( 58) 00:09:03.887 11677.324 - 11736.902: 89.7309% ( 42) 00:09:03.887 11736.902 - 11796.480: 90.0735% ( 41) 00:09:03.887 11796.480 - 11856.058: 90.4830% ( 49) 00:09:03.887 11856.058 - 11915.636: 90.9509% ( 56) 00:09:03.887 11915.636 - 11975.215: 91.3854% ( 52) 00:09:03.887 11975.215 - 12034.793: 91.8031% ( 50) 00:09:03.887 12034.793 - 12094.371: 92.0705% ( 32) 00:09:03.887 12094.371 - 12153.949: 92.3463% ( 33) 00:09:03.887 12153.949 - 12213.527: 92.6053% ( 31) 00:09:03.887 12213.527 - 12273.105: 92.9061% ( 36) 00:09:03.887 12273.105 - 12332.684: 93.2403% ( 40) 00:09:03.887 12332.684 - 12392.262: 93.6581% ( 50) 00:09:03.887 12392.262 - 12451.840: 93.9505% ( 35) 00:09:03.887 12451.840 - 12511.418: 94.3098% ( 43) 00:09:03.887 12511.418 - 12570.996: 94.6441% ( 40) 00:09:03.887 12570.996 - 12630.575: 95.0201% ( 45) 00:09:03.887 12630.575 - 12690.153: 95.2958% ( 33) 00:09:03.887 12690.153 - 12749.731: 95.6049% ( 37) 00:09:03.887 12749.731 - 12809.309: 95.8473% ( 29) 00:09:03.887 12809.309 - 12868.887: 96.0729% ( 27) 00:09:03.887 12868.887 - 12928.465: 96.3152% ( 29) 00:09:03.887 12928.465 - 12988.044: 96.5324% ( 26) 00:09:03.887 12988.044 - 13047.622: 96.7914% ( 31) 00:09:03.887 13047.622 - 13107.200: 97.0170% ( 27) 00:09:03.887 13107.200 - 13166.778: 97.2092% ( 23) 00:09:03.887 13166.778 - 13226.356: 97.3596% ( 18) 00:09:03.887 13226.356 - 13285.935: 97.5184% ( 19) 00:09:03.887 13285.935 - 13345.513: 97.6771% ( 19) 00:09:03.887 13345.513 - 13405.091: 97.7858% ( 13) 00:09:03.887 13405.091 - 13464.669: 97.8944% ( 13) 00:09:03.887 13464.669 - 13524.247: 97.9529% ( 7) 00:09:03.887 13524.247 - 13583.825: 98.0699% ( 14) 00:09:03.887 13583.825 - 13643.404: 98.1785% ( 13) 00:09:03.887 13643.404 - 13702.982: 98.2955% ( 14) 00:09:03.887 13702.982 - 13762.560: 98.3874% ( 11) 00:09:03.887 13762.560 - 13822.138: 98.4709% ( 10) 00:09:03.887 13822.138 - 13881.716: 98.5294% ( 7) 00:09:03.887 
13881.716 - 13941.295: 98.5963% ( 8) 00:09:03.887 13941.295 - 14000.873: 98.6547% ( 7) 00:09:03.887 14000.873 - 14060.451: 98.6965% ( 5) 00:09:03.887 14060.451 - 14120.029: 98.7216% ( 3) 00:09:03.887 14120.029 - 14179.607: 98.7550% ( 4) 00:09:03.887 14179.607 - 14239.185: 98.7801% ( 3) 00:09:03.887 14239.185 - 14298.764: 98.8051% ( 3) 00:09:03.887 14298.764 - 14358.342: 98.8302% ( 3) 00:09:03.887 14358.342 - 14417.920: 98.8636% ( 4) 00:09:03.887 14417.920 - 14477.498: 98.8887% ( 3) 00:09:03.887 14477.498 - 14537.076: 98.8971% ( 1) 00:09:03.887 14537.076 - 14596.655: 98.9305% ( 4) 00:09:03.887 17635.142 - 17754.298: 98.9555% ( 3) 00:09:03.887 17754.298 - 17873.455: 99.0057% ( 6) 00:09:03.887 17873.455 - 17992.611: 99.0475% ( 5) 00:09:03.887 17992.611 - 18111.767: 99.0976% ( 6) 00:09:03.887 18111.767 - 18230.924: 99.1394% ( 5) 00:09:03.887 18230.924 - 18350.080: 99.1895% ( 6) 00:09:03.887 18350.080 - 18469.236: 99.2396% ( 6) 00:09:03.887 18469.236 - 18588.393: 99.2814% ( 5) 00:09:03.887 18588.393 - 18707.549: 99.3399% ( 7) 00:09:03.887 18707.549 - 18826.705: 99.3733% ( 4) 00:09:03.887 18826.705 - 18945.862: 99.4235% ( 6) 00:09:03.887 18945.862 - 19065.018: 99.4652% ( 5) 00:09:03.887 24427.055 - 24546.211: 99.4987% ( 4) 00:09:03.887 24546.211 - 24665.367: 99.5572% ( 7) 00:09:03.887 24665.367 - 24784.524: 99.6073% ( 6) 00:09:03.887 24784.524 - 24903.680: 99.6407% ( 4) 00:09:03.887 24903.680 - 25022.836: 99.6992% ( 7) 00:09:03.887 25022.836 - 25141.993: 99.7410% ( 5) 00:09:03.887 25141.993 - 25261.149: 99.7744% ( 4) 00:09:03.887 25261.149 - 25380.305: 99.8329% ( 7) 00:09:03.887 25380.305 - 25499.462: 99.8914% ( 7) 00:09:03.887 25499.462 - 25618.618: 99.9332% ( 5) 00:09:03.887 25618.618 - 25737.775: 99.9749% ( 5) 00:09:03.887 25737.775 - 25856.931: 100.0000% ( 3) 00:09:03.887 00:09:03.887 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:03.887 ============================================================================== 00:09:03.887 Range in us Cumulative IO count 00:09:03.887 4081.105 - 4110.895: 0.0585% ( 7) 00:09:03.887 4110.895 - 4140.684: 0.1671% ( 13) 00:09:03.887 4140.684 - 4170.473: 0.1755% ( 1) 00:09:03.887 4170.473 - 4200.262: 0.2005% ( 3) 00:09:03.887 4200.262 - 4230.051: 0.2172% ( 2) 00:09:03.887 4230.051 - 4259.840: 0.2340% ( 2) 00:09:03.887 4259.840 - 4289.629: 0.2507% ( 2) 00:09:03.887 4289.629 - 4319.418: 0.2674% ( 2) 00:09:03.887 4319.418 - 4349.207: 0.2841% ( 2) 00:09:03.887 4349.207 - 4378.996: 0.3008% ( 2) 00:09:03.887 4378.996 - 4408.785: 0.3092% ( 1) 00:09:03.887 4408.785 - 4438.575: 0.3259% ( 2) 00:09:03.887 4438.575 - 4468.364: 0.3426% ( 2) 00:09:03.887 4468.364 - 4498.153: 0.3509% ( 1) 00:09:03.887 4498.153 - 4527.942: 0.3676% ( 2) 00:09:03.887 4527.942 - 4557.731: 0.3844% ( 2) 00:09:03.887 4557.731 - 4587.520: 0.3927% ( 1) 00:09:03.887 4587.520 - 4617.309: 0.4094% ( 2) 00:09:03.887 4617.309 - 4647.098: 0.4261% ( 2) 00:09:03.887 4647.098 - 4676.887: 0.4428% ( 2) 00:09:03.887 4676.887 - 4706.676: 0.4596% ( 2) 00:09:03.887 4706.676 - 4736.465: 0.4679% ( 1) 00:09:03.887 4736.465 - 4766.255: 0.4846% ( 2) 00:09:03.887 4766.255 - 4796.044: 0.5097% ( 3) 00:09:03.887 4796.044 - 4825.833: 0.5264% ( 2) 00:09:03.887 4825.833 - 4855.622: 0.5348% ( 1) 00:09:03.887 7328.116 - 7357.905: 0.5431% ( 1) 00:09:03.887 7447.273 - 7477.062: 0.5765% ( 4) 00:09:03.887 7477.062 - 7506.851: 0.6601% ( 10) 00:09:03.887 7506.851 - 7536.640: 0.7269% ( 8) 00:09:03.887 7536.640 - 7566.429: 0.7687% ( 5) 00:09:03.887 7566.429 - 7596.218: 0.7854% ( 2) 00:09:03.887 7596.218 - 7626.007: 
0.8021% ( 2) 00:09:03.887 7626.007 - 7685.585: 0.8356% ( 4) 00:09:03.887 7685.585 - 7745.164: 0.8606% ( 3) 00:09:03.887 7745.164 - 7804.742: 0.8941% ( 4) 00:09:03.887 7804.742 - 7864.320: 0.9191% ( 3) 00:09:03.887 7864.320 - 7923.898: 0.9525% ( 4) 00:09:03.887 7923.898 - 7983.476: 0.9860% ( 4) 00:09:03.887 7983.476 - 8043.055: 1.0194% ( 4) 00:09:03.887 8043.055 - 8102.633: 1.0528% ( 4) 00:09:03.887 8102.633 - 8162.211: 1.0695% ( 2) 00:09:03.887 9055.884 - 9115.462: 1.0779% ( 1) 00:09:03.887 9175.040 - 9234.618: 1.1698% ( 11) 00:09:03.887 9234.618 - 9294.196: 1.3620% ( 23) 00:09:03.887 9294.196 - 9353.775: 1.6544% ( 35) 00:09:03.887 9353.775 - 9413.353: 2.1892% ( 64) 00:09:03.887 9413.353 - 9472.931: 2.9078% ( 86) 00:09:03.887 9472.931 - 9532.509: 3.8269% ( 110) 00:09:03.887 9532.509 - 9592.087: 4.8212% ( 119) 00:09:03.887 9592.087 - 9651.665: 6.1832% ( 163) 00:09:03.887 9651.665 - 9711.244: 8.1969% ( 241) 00:09:03.887 9711.244 - 9770.822: 10.9459% ( 329) 00:09:03.887 9770.822 - 9830.400: 13.8787% ( 351) 00:09:03.887 9830.400 - 9889.978: 16.8366% ( 354) 00:09:03.887 9889.978 - 9949.556: 20.2289% ( 406) 00:09:03.887 9949.556 - 10009.135: 23.6798% ( 413) 00:09:03.887 10009.135 - 10068.713: 27.2894% ( 432) 00:09:03.887 10068.713 - 10128.291: 31.1915% ( 467) 00:09:03.887 10128.291 - 10187.869: 35.4111% ( 505) 00:09:03.887 10187.869 - 10247.447: 39.7309% ( 517) 00:09:03.887 10247.447 - 10307.025: 44.0759% ( 520) 00:09:03.887 10307.025 - 10366.604: 48.2620% ( 501) 00:09:03.887 10366.604 - 10426.182: 52.4649% ( 503) 00:09:03.887 10426.182 - 10485.760: 56.3586% ( 466) 00:09:03.887 10485.760 - 10545.338: 59.8930% ( 423) 00:09:03.887 10545.338 - 10604.916: 63.2854% ( 406) 00:09:03.887 10604.916 - 10664.495: 66.5525% ( 391) 00:09:03.887 10664.495 - 10724.073: 69.4769% ( 350) 00:09:03.887 10724.073 - 10783.651: 72.1424% ( 319) 00:09:03.887 10783.651 - 10843.229: 74.7326% ( 310) 00:09:03.887 10843.229 - 10902.807: 76.9803% ( 269) 00:09:03.887 10902.807 - 10962.385: 79.1026% ( 254) 00:09:03.887 10962.385 - 11021.964: 80.9325% ( 219) 00:09:03.887 11021.964 - 11081.542: 82.4198% ( 178) 00:09:03.887 11081.542 - 11141.120: 83.7650% ( 161) 00:09:03.887 11141.120 - 11200.698: 84.7092% ( 113) 00:09:03.887 11200.698 - 11260.276: 85.5448% ( 100) 00:09:03.887 11260.276 - 11319.855: 86.3135% ( 92) 00:09:03.887 11319.855 - 11379.433: 86.8900% ( 69) 00:09:03.887 11379.433 - 11439.011: 87.3997% ( 61) 00:09:03.887 11439.011 - 11498.589: 87.8593% ( 55) 00:09:03.887 11498.589 - 11558.167: 88.3356% ( 57) 00:09:03.887 11558.167 - 11617.745: 88.8787% ( 65) 00:09:03.887 11617.745 - 11677.324: 89.3215% ( 53) 00:09:03.887 11677.324 - 11736.902: 89.6975% ( 45) 00:09:03.887 11736.902 - 11796.480: 90.1070% ( 49) 00:09:03.887 11796.480 - 11856.058: 90.4412% ( 40) 00:09:03.887 11856.058 - 11915.636: 90.7420% ( 36) 00:09:03.887 11915.636 - 11975.215: 91.1430% ( 48) 00:09:03.887 11975.215 - 12034.793: 91.5274% ( 46) 00:09:03.887 12034.793 - 12094.371: 91.8282% ( 36) 00:09:03.887 12094.371 - 12153.949: 92.1290% ( 36) 00:09:03.887 12153.949 - 12213.527: 92.4883% ( 43) 00:09:03.887 12213.527 - 12273.105: 92.8977% ( 49) 00:09:03.887 12273.105 - 12332.684: 93.2069% ( 37) 00:09:03.887 12332.684 - 12392.262: 93.5160% ( 37) 00:09:03.887 12392.262 - 12451.840: 93.8168% ( 36) 00:09:03.887 12451.840 - 12511.418: 94.1511% ( 40) 00:09:03.887 12511.418 - 12570.996: 94.4853% ( 40) 00:09:03.887 12570.996 - 12630.575: 94.7276% ( 29) 00:09:03.887 12630.575 - 12690.153: 94.9616% ( 28) 00:09:03.887 12690.153 - 12749.731: 95.2039% ( 29) 00:09:03.887 
12749.731 - 12809.309: 95.5130% ( 37) 00:09:03.887 12809.309 - 12868.887: 95.7804% ( 32) 00:09:03.887 12868.887 - 12928.465: 96.0812% ( 36) 00:09:03.887 12928.465 - 12988.044: 96.3653% ( 34) 00:09:03.887 12988.044 - 13047.622: 96.6410% ( 33) 00:09:03.887 13047.622 - 13107.200: 96.8416% ( 24) 00:09:03.887 13107.200 - 13166.778: 97.0254% ( 22) 00:09:03.887 13166.778 - 13226.356: 97.2009% ( 21) 00:09:03.887 13226.356 - 13285.935: 97.3596% ( 19) 00:09:03.888 13285.935 - 13345.513: 97.4850% ( 15) 00:09:03.888 13345.513 - 13405.091: 97.6019% ( 14) 00:09:03.888 13405.091 - 13464.669: 97.7189% ( 14) 00:09:03.888 13464.669 - 13524.247: 97.8443% ( 15) 00:09:03.888 13524.247 - 13583.825: 97.9779% ( 16) 00:09:03.888 13583.825 - 13643.404: 98.0949% ( 14) 00:09:03.888 13643.404 - 13702.982: 98.2370% ( 17) 00:09:03.888 13702.982 - 13762.560: 98.3623% ( 15) 00:09:03.888 13762.560 - 13822.138: 98.4876% ( 15) 00:09:03.888 13822.138 - 13881.716: 98.5963% ( 13) 00:09:03.888 13881.716 - 13941.295: 98.6380% ( 5) 00:09:03.888 13941.295 - 14000.873: 98.6882% ( 6) 00:09:03.888 14000.873 - 14060.451: 98.7216% ( 4) 00:09:03.888 14060.451 - 14120.029: 98.7383% ( 2) 00:09:03.888 14120.029 - 14179.607: 98.7634% ( 3) 00:09:03.888 14179.607 - 14239.185: 98.7801% ( 2) 00:09:03.888 14239.185 - 14298.764: 98.8051% ( 3) 00:09:03.888 14298.764 - 14358.342: 98.8386% ( 4) 00:09:03.888 14358.342 - 14417.920: 98.8636% ( 3) 00:09:03.888 14417.920 - 14477.498: 98.8803% ( 2) 00:09:03.888 14477.498 - 14537.076: 98.9054% ( 3) 00:09:03.888 14537.076 - 14596.655: 98.9305% ( 3) 00:09:03.888 16562.735 - 16681.891: 98.9555% ( 3) 00:09:03.888 16681.891 - 16801.047: 98.9973% ( 5) 00:09:03.888 16801.047 - 16920.204: 99.0475% ( 6) 00:09:03.888 16920.204 - 17039.360: 99.0809% ( 4) 00:09:03.888 17039.360 - 17158.516: 99.1394% ( 7) 00:09:03.888 17158.516 - 17277.673: 99.1895% ( 6) 00:09:03.888 17277.673 - 17396.829: 99.2480% ( 7) 00:09:03.888 17396.829 - 17515.985: 99.2981% ( 6) 00:09:03.888 17515.985 - 17635.142: 99.3232% ( 3) 00:09:03.888 17635.142 - 17754.298: 99.3650% ( 5) 00:09:03.888 17754.298 - 17873.455: 99.4235% ( 7) 00:09:03.888 17873.455 - 17992.611: 99.4569% ( 4) 00:09:03.888 17992.611 - 18111.767: 99.4652% ( 1) 00:09:03.888 23592.960 - 23712.116: 99.5070% ( 5) 00:09:03.888 23712.116 - 23831.273: 99.5572% ( 6) 00:09:03.888 23831.273 - 23950.429: 99.6073% ( 6) 00:09:03.888 23950.429 - 24069.585: 99.6491% ( 5) 00:09:03.888 24069.585 - 24188.742: 99.6992% ( 6) 00:09:03.888 24188.742 - 24307.898: 99.7493% ( 6) 00:09:03.888 24307.898 - 24427.055: 99.7995% ( 6) 00:09:03.888 24427.055 - 24546.211: 99.8496% ( 6) 00:09:03.888 24546.211 - 24665.367: 99.8914% ( 5) 00:09:03.888 24665.367 - 24784.524: 99.9415% ( 6) 00:09:03.888 24784.524 - 24903.680: 99.9916% ( 6) 00:09:03.888 24903.680 - 25022.836: 100.0000% ( 1) 00:09:03.888 00:09:03.888 05:55:55 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:03.888 00:09:03.888 real 0m2.571s 00:09:03.888 user 0m2.214s 00:09:03.888 sys 0m0.247s 00:09:03.888 05:55:55 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:03.888 05:55:55 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:09:03.888 ************************************ 00:09:03.888 END TEST nvme_perf 00:09:03.888 ************************************ 00:09:03.888 05:55:55 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:03.888 05:55:55 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:03.888 05:55:55 nvme -- 
common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:03.888 05:55:55 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.888 05:55:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:03.888 ************************************ 00:09:03.888 START TEST nvme_hello_world 00:09:03.888 ************************************ 00:09:03.888 05:55:55 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:04.148 Initializing NVMe Controllers 00:09:04.148 Attached to 0000:00:10.0 00:09:04.148 Namespace ID: 1 size: 6GB 00:09:04.148 Attached to 0000:00:11.0 00:09:04.148 Namespace ID: 1 size: 5GB 00:09:04.148 Attached to 0000:00:13.0 00:09:04.148 Namespace ID: 1 size: 1GB 00:09:04.148 Attached to 0000:00:12.0 00:09:04.148 Namespace ID: 1 size: 4GB 00:09:04.148 Namespace ID: 2 size: 4GB 00:09:04.148 Namespace ID: 3 size: 4GB 00:09:04.148 Initialization complete. 00:09:04.148 INFO: using host memory buffer for IO 00:09:04.148 Hello world! 00:09:04.148 INFO: using host memory buffer for IO 00:09:04.148 Hello world! 00:09:04.148 INFO: using host memory buffer for IO 00:09:04.148 Hello world! 00:09:04.148 INFO: using host memory buffer for IO 00:09:04.148 Hello world! 00:09:04.148 INFO: using host memory buffer for IO 00:09:04.148 Hello world! 00:09:04.148 INFO: using host memory buffer for IO 00:09:04.148 Hello world! 00:09:04.148 00:09:04.148 real 0m0.263s 00:09:04.148 user 0m0.092s 00:09:04.148 sys 0m0.125s 00:09:04.148 05:55:55 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.148 05:55:55 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:04.148 ************************************ 00:09:04.148 END TEST nvme_hello_world 00:09:04.148 ************************************ 00:09:04.148 05:55:55 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:04.148 05:55:55 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:04.148 05:55:55 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:04.148 05:55:55 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.148 05:55:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:04.148 ************************************ 00:09:04.148 START TEST nvme_sgl 00:09:04.148 ************************************ 00:09:04.148 05:55:55 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:04.407 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:09:04.407 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:09:04.407 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:09:04.407 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:09:04.407 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:09:04.407 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:09:04.407 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:09:04.407 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:09:04.407 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:09:04.407 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:09:04.407 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:09:04.407 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:09:04.407 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:09:04.407 0000:00:13.0: build_io_request_1 Invalid IO length parameter 
00:09:04.407 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:09:04.407 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:09:04.407 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:09:04.407 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:09:04.407 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:09:04.407 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:09:04.407 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:09:04.407 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:09:04.407 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:09:04.407 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:09:04.407 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:09:04.407 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:09:04.407 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:09:04.407 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:09:04.407 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:09:04.407 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:09:04.407 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:09:04.407 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:09:04.407 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:09:04.407 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:09:04.407 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:09:04.407 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:09:04.407 NVMe Readv/Writev Request test 00:09:04.407 Attached to 0000:00:10.0 00:09:04.407 Attached to 0000:00:11.0 00:09:04.407 Attached to 0000:00:13.0 00:09:04.407 Attached to 0000:00:12.0 00:09:04.407 0000:00:10.0: build_io_request_2 test passed 00:09:04.407 0000:00:10.0: build_io_request_4 test passed 00:09:04.407 0000:00:10.0: build_io_request_5 test passed 00:09:04.407 0000:00:10.0: build_io_request_6 test passed 00:09:04.407 0000:00:10.0: build_io_request_7 test passed 00:09:04.407 0000:00:10.0: build_io_request_10 test passed 00:09:04.407 0000:00:11.0: build_io_request_2 test passed 00:09:04.407 0000:00:11.0: build_io_request_4 test passed 00:09:04.407 0000:00:11.0: build_io_request_5 test passed 00:09:04.407 0000:00:11.0: build_io_request_6 test passed 00:09:04.407 0000:00:11.0: build_io_request_7 test passed 00:09:04.407 0000:00:11.0: build_io_request_10 test passed 00:09:04.407 Cleaning up... 
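The "Invalid IO length parameter" lines above are the SGL test's expected negative cases: the build_io_request_* variants deliberately assemble scatter-gather payloads with lengths the driver must reject, while the "test passed" lines cover the valid layouts. As a minimal sketch of reproducing this outside the autotest harness, assuming the vagrant repository layout used throughout this log and a host where SPDK's setup script has already rebound the NVMe devices to a userspace driver, the test is just the bare binary and attaches to every controller it can claim:

    # one-time: detach the NVMe devices from the kernel driver so SPDK
    # tests can claim them (SPDK's standard setup script)
    sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh

    # run the SGL test directly; it prints one line per build_io_request_N
    # case, mixing expected "Invalid IO length parameter" rejections with
    # "test passed" for the valid request shapes
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl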
00:09:04.407 00:09:04.407 real 0m0.325s 00:09:04.407 user 0m0.175s 00:09:04.407 sys 0m0.103s 00:09:04.407 05:55:56 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.407 05:55:56 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:09:04.407 ************************************ 00:09:04.407 END TEST nvme_sgl 00:09:04.407 ************************************ 00:09:04.407 05:55:56 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:04.407 05:55:56 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:04.407 05:55:56 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:04.407 05:55:56 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.407 05:55:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:04.666 ************************************ 00:09:04.666 START TEST nvme_e2edp 00:09:04.666 ************************************ 00:09:04.666 05:55:56 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:04.666 NVMe Write/Read with End-to-End data protection test 00:09:04.666 Attached to 0000:00:10.0 00:09:04.666 Attached to 0000:00:11.0 00:09:04.666 Attached to 0000:00:13.0 00:09:04.666 Attached to 0000:00:12.0 00:09:04.666 Cleaning up... 00:09:04.666 00:09:04.666 real 0m0.250s 00:09:04.666 user 0m0.100s 00:09:04.666 sys 0m0.103s 00:09:04.666 ************************************ 00:09:04.666 END TEST nvme_e2edp 00:09:04.666 ************************************ 00:09:04.666 05:55:56 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.666 05:55:56 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:09:04.926 05:55:56 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:04.926 05:55:56 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:04.926 05:55:56 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:04.926 05:55:56 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.926 05:55:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:04.926 ************************************ 00:09:04.926 START TEST nvme_reserve 00:09:04.926 ************************************ 00:09:04.926 05:55:56 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:05.186 ===================================================== 00:09:05.186 NVMe Controller at PCI bus 0, device 16, function 0 00:09:05.186 ===================================================== 00:09:05.186 Reservations: Not Supported 00:09:05.186 ===================================================== 00:09:05.186 NVMe Controller at PCI bus 0, device 17, function 0 00:09:05.186 ===================================================== 00:09:05.186 Reservations: Not Supported 00:09:05.186 ===================================================== 00:09:05.186 NVMe Controller at PCI bus 0, device 19, function 0 00:09:05.186 ===================================================== 00:09:05.186 Reservations: Not Supported 00:09:05.186 ===================================================== 00:09:05.186 NVMe Controller at PCI bus 0, device 18, function 0 00:09:05.186 ===================================================== 00:09:05.186 Reservations: Not Supported 00:09:05.186 Reservation test passed 00:09:05.186 00:09:05.186 real 0m0.249s 00:09:05.186 user 0m0.101s 00:09:05.186 sys 0m0.106s 00:09:05.186 
************************************ 00:09:05.186 END TEST nvme_reserve 00:09:05.186 ************************************ 00:09:05.186 05:55:56 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.186 05:55:56 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:09:05.186 05:55:56 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:05.186 05:55:56 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:05.186 05:55:56 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:05.186 05:55:56 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.186 05:55:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:05.186 ************************************ 00:09:05.186 START TEST nvme_err_injection 00:09:05.186 ************************************ 00:09:05.186 05:55:56 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:05.445 NVMe Error Injection test 00:09:05.445 Attached to 0000:00:10.0 00:09:05.445 Attached to 0000:00:11.0 00:09:05.445 Attached to 0000:00:13.0 00:09:05.445 Attached to 0000:00:12.0 00:09:05.445 0000:00:11.0: get features failed as expected 00:09:05.445 0000:00:13.0: get features failed as expected 00:09:05.445 0000:00:12.0: get features failed as expected 00:09:05.445 0000:00:10.0: get features failed as expected 00:09:05.445 0000:00:10.0: get features successfully as expected 00:09:05.445 0000:00:11.0: get features successfully as expected 00:09:05.445 0000:00:13.0: get features successfully as expected 00:09:05.445 0000:00:12.0: get features successfully as expected 00:09:05.445 0000:00:11.0: read failed as expected 00:09:05.445 0000:00:10.0: read failed as expected 00:09:05.445 0000:00:13.0: read failed as expected 00:09:05.445 0000:00:12.0: read failed as expected 00:09:05.445 0000:00:10.0: read successfully as expected 00:09:05.445 0000:00:11.0: read successfully as expected 00:09:05.445 0000:00:13.0: read successfully as expected 00:09:05.445 0000:00:12.0: read successfully as expected 00:09:05.445 Cleaning up... 
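The err_injection output above pairs every "failed as expected" line with a matching "successfully as expected" line: the test arms error injection on an admin Get Features command, checks that the command now fails, then disarms the injection and checks that the same command succeeds. The shape of that assertion, as a hedged sketch (expect_rc and get_features_cmd are hypothetical names, not part of the SPDK tree):

    expect_rc() {
      # run a command and require a specific exit code
      local want=$1; shift
      "$@"; local got=$?
      [[ $got -eq $want ]] || { echo "expected rc=$want, got rc=$got" >&2; return 1; }
    }
    # arm injection, expect failure; disarm, expect success:
    # expect_rc 1 get_features_cmd && expect_rc 0 get_features_cmd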
00:09:05.445 ************************************ 00:09:05.445 END TEST nvme_err_injection 00:09:05.445 ************************************ 00:09:05.445 00:09:05.445 real 0m0.270s 00:09:05.445 user 0m0.093s 00:09:05.445 sys 0m0.126s 00:09:05.445 05:55:57 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.445 05:55:57 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:09:05.445 05:55:57 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:05.445 05:55:57 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:05.445 05:55:57 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:09:05.445 05:55:57 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.445 05:55:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:05.445 ************************************ 00:09:05.445 START TEST nvme_overhead 00:09:05.445 ************************************ 00:09:05.445 05:55:57 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:06.823 Initializing NVMe Controllers 00:09:06.823 Attached to 0000:00:10.0 00:09:06.823 Attached to 0000:00:11.0 00:09:06.823 Attached to 0000:00:13.0 00:09:06.823 Attached to 0000:00:12.0 00:09:06.823 Initialization complete. Launching workers. 00:09:06.823 submit (in ns) avg, min, max = 17315.8, 13992.7, 98847.3 00:09:06.823 complete (in ns) avg, min, max = 12178.3, 9518.2, 77263.6 00:09:06.823 00:09:06.823 Submit histogram 00:09:06.823 ================ 00:09:06.823 Range in us Cumulative Count 00:09:06.823 13.964 - 14.022: 0.0110% ( 1) 00:09:06.823 14.080 - 14.138: 0.0219% ( 1) 00:09:06.823 14.138 - 14.196: 0.0439% ( 2) 00:09:06.823 14.255 - 14.313: 0.0768% ( 3) 00:09:06.823 14.313 - 14.371: 0.1536% ( 7) 00:09:06.823 14.371 - 14.429: 0.2963% ( 13) 00:09:06.823 14.429 - 14.487: 0.5157% ( 20) 00:09:06.823 14.487 - 14.545: 0.6913% ( 16) 00:09:06.823 14.545 - 14.604: 0.8888% ( 18) 00:09:06.823 14.604 - 14.662: 1.0095% ( 11) 00:09:06.823 14.662 - 14.720: 1.2400% ( 21) 00:09:06.823 14.720 - 14.778: 1.6131% ( 34) 00:09:06.823 14.778 - 14.836: 2.4800% ( 79) 00:09:06.823 14.836 - 14.895: 3.3359% ( 78) 00:09:06.823 14.895 - 15.011: 5.4318% ( 191) 00:09:06.823 15.011 - 15.127: 8.2300% ( 255) 00:09:06.823 15.127 - 15.244: 14.3860% ( 561) 00:09:06.823 15.244 - 15.360: 25.5898% ( 1021) 00:09:06.823 15.360 - 15.476: 38.3957% ( 1167) 00:09:06.823 15.476 - 15.593: 46.5599% ( 744) 00:09:06.823 15.593 - 15.709: 50.8395% ( 390) 00:09:06.823 15.709 - 15.825: 53.2865% ( 223) 00:09:06.823 15.825 - 15.942: 55.3056% ( 184) 00:09:06.823 15.942 - 16.058: 56.8090% ( 137) 00:09:06.823 16.058 - 16.175: 57.8734% ( 97) 00:09:06.823 16.175 - 16.291: 59.4206% ( 141) 00:09:06.823 16.291 - 16.407: 61.1215% ( 155) 00:09:06.823 16.407 - 16.524: 62.2846% ( 106) 00:09:06.823 16.524 - 16.640: 63.0747% ( 72) 00:09:06.823 16.640 - 16.756: 63.6453% ( 52) 00:09:06.823 16.756 - 16.873: 64.0514% ( 37) 00:09:06.823 16.873 - 16.989: 64.3367% ( 26) 00:09:06.823 16.989 - 17.105: 64.6220% ( 26) 00:09:06.823 17.105 - 17.222: 64.8305% ( 19) 00:09:06.823 17.222 - 17.338: 64.9731% ( 13) 00:09:06.823 17.338 - 17.455: 65.1158% ( 13) 00:09:06.823 17.455 - 17.571: 65.2036% ( 8) 00:09:06.823 17.571 - 17.687: 65.2474% ( 4) 00:09:06.823 17.687 - 17.804: 65.2584% ( 1) 00:09:06.823 17.804 - 17.920: 65.9058% ( 59) 00:09:06.823 17.920 - 18.036: 69.0003% ( 282) 00:09:06.823 18.036 - 
18.153: 74.8491% ( 533) 00:09:06.823 18.153 - 18.269: 79.2275% ( 399) 00:09:06.823 18.269 - 18.385: 81.6526% ( 221) 00:09:06.823 18.385 - 18.502: 83.7156% ( 188) 00:09:06.823 18.502 - 18.618: 84.8568% ( 104) 00:09:06.823 18.618 - 18.735: 85.5701% ( 65) 00:09:06.823 18.735 - 18.851: 86.0529% ( 44) 00:09:06.823 18.851 - 18.967: 86.5577% ( 46) 00:09:06.823 18.967 - 19.084: 87.3148% ( 69) 00:09:06.823 19.084 - 19.200: 88.1817% ( 79) 00:09:06.823 19.200 - 19.316: 88.7523% ( 52) 00:09:06.823 19.316 - 19.433: 89.3010% ( 50) 00:09:06.823 19.433 - 19.549: 89.8058% ( 46) 00:09:06.823 19.549 - 19.665: 89.9813% ( 16) 00:09:06.823 19.665 - 19.782: 90.1021% ( 11) 00:09:06.823 19.782 - 19.898: 90.2447% ( 13) 00:09:06.823 19.898 - 20.015: 90.3544% ( 10) 00:09:06.823 20.015 - 20.131: 90.4861% ( 12) 00:09:06.823 20.131 - 20.247: 90.5849% ( 9) 00:09:06.823 20.247 - 20.364: 90.6397% ( 5) 00:09:06.823 20.364 - 20.480: 90.8482% ( 19) 00:09:06.823 20.480 - 20.596: 90.9141% ( 6) 00:09:06.823 20.596 - 20.713: 90.9909% ( 7) 00:09:06.823 20.713 - 20.829: 91.1116% ( 11) 00:09:06.823 20.829 - 20.945: 91.1884% ( 7) 00:09:06.823 20.945 - 21.062: 91.2762% ( 8) 00:09:06.823 21.062 - 21.178: 91.3530% ( 7) 00:09:06.823 21.178 - 21.295: 91.4518% ( 9) 00:09:06.823 21.295 - 21.411: 91.6054% ( 14) 00:09:06.823 21.411 - 21.527: 91.8029% ( 18) 00:09:06.823 21.527 - 21.644: 91.9895% ( 17) 00:09:06.823 21.644 - 21.760: 92.1541% ( 15) 00:09:06.823 21.760 - 21.876: 92.2638% ( 10) 00:09:06.823 21.876 - 21.993: 92.4065% ( 13) 00:09:06.823 21.993 - 22.109: 92.5272% ( 11) 00:09:06.823 22.109 - 22.225: 92.6479% ( 11) 00:09:06.823 22.225 - 22.342: 92.7795% ( 12) 00:09:06.823 22.342 - 22.458: 92.9222% ( 13) 00:09:06.823 22.458 - 22.575: 92.9880% ( 6) 00:09:06.823 22.575 - 22.691: 93.0649% ( 7) 00:09:06.823 22.691 - 22.807: 93.1636% ( 9) 00:09:06.823 22.807 - 22.924: 93.2624% ( 9) 00:09:06.823 22.924 - 23.040: 93.3502% ( 8) 00:09:06.823 23.040 - 23.156: 93.4818% ( 12) 00:09:06.823 23.156 - 23.273: 93.5696% ( 8) 00:09:06.823 23.273 - 23.389: 93.7123% ( 13) 00:09:06.823 23.389 - 23.505: 93.8988% ( 17) 00:09:06.823 23.505 - 23.622: 93.9427% ( 4) 00:09:06.823 23.622 - 23.738: 94.0086% ( 6) 00:09:06.823 23.738 - 23.855: 94.0963% ( 8) 00:09:06.823 23.855 - 23.971: 94.1622% ( 6) 00:09:06.823 23.971 - 24.087: 94.2280% ( 6) 00:09:06.823 24.087 - 24.204: 94.3487% ( 11) 00:09:06.823 24.204 - 24.320: 94.4365% ( 8) 00:09:06.823 24.320 - 24.436: 94.5572% ( 11) 00:09:06.823 24.436 - 24.553: 94.6011% ( 4) 00:09:06.823 24.553 - 24.669: 94.6889% ( 8) 00:09:06.824 24.669 - 24.785: 94.7438% ( 5) 00:09:06.824 24.785 - 24.902: 94.8316% ( 8) 00:09:06.824 24.902 - 25.018: 94.9193% ( 8) 00:09:06.824 25.018 - 25.135: 95.0291% ( 10) 00:09:06.824 25.135 - 25.251: 95.1059% ( 7) 00:09:06.824 25.251 - 25.367: 95.2485% ( 13) 00:09:06.824 25.367 - 25.484: 95.3583% ( 10) 00:09:06.824 25.484 - 25.600: 95.4570% ( 9) 00:09:06.824 25.600 - 25.716: 95.5777% ( 11) 00:09:06.824 25.716 - 25.833: 95.6765% ( 9) 00:09:06.824 25.833 - 25.949: 95.7314% ( 5) 00:09:06.824 25.949 - 26.065: 95.7753% ( 4) 00:09:06.824 26.065 - 26.182: 95.8521% ( 7) 00:09:06.824 26.182 - 26.298: 95.9289% ( 7) 00:09:06.824 26.298 - 26.415: 95.9838% ( 5) 00:09:06.824 26.415 - 26.531: 96.0496% ( 6) 00:09:06.824 26.531 - 26.647: 96.1045% ( 5) 00:09:06.824 26.647 - 26.764: 96.1593% ( 5) 00:09:06.824 26.764 - 26.880: 96.1813% ( 2) 00:09:06.824 26.880 - 26.996: 96.2252% ( 4) 00:09:06.824 26.996 - 27.113: 96.2691% ( 4) 00:09:06.824 27.113 - 27.229: 96.3459% ( 7) 00:09:06.824 27.229 - 27.345: 96.3898% ( 4) 00:09:06.824 
27.345 - 27.462: 96.4007% ( 1) 00:09:06.824 27.462 - 27.578: 96.4117% ( 1) 00:09:06.824 27.578 - 27.695: 96.4885% ( 7) 00:09:06.824 27.695 - 27.811: 96.5653% ( 7) 00:09:06.824 27.811 - 27.927: 96.5983% ( 3) 00:09:06.824 27.927 - 28.044: 96.6202% ( 2) 00:09:06.824 28.044 - 28.160: 96.6970% ( 7) 00:09:06.824 28.160 - 28.276: 96.7190% ( 2) 00:09:06.824 28.276 - 28.393: 96.7738% ( 5) 00:09:06.824 28.393 - 28.509: 96.7848% ( 1) 00:09:06.824 28.509 - 28.625: 96.8287% ( 4) 00:09:06.824 28.625 - 28.742: 96.8616% ( 3) 00:09:06.824 28.742 - 28.858: 96.9384% ( 7) 00:09:06.824 28.858 - 28.975: 96.9494% ( 1) 00:09:06.824 28.975 - 29.091: 96.9604% ( 1) 00:09:06.824 29.091 - 29.207: 96.9714% ( 1) 00:09:06.824 29.207 - 29.324: 96.9933% ( 2) 00:09:06.824 29.324 - 29.440: 97.0153% ( 2) 00:09:06.824 29.440 - 29.556: 97.0811% ( 6) 00:09:06.824 29.556 - 29.673: 97.1579% ( 7) 00:09:06.824 29.673 - 29.789: 97.2128% ( 5) 00:09:06.824 29.789 - 30.022: 97.4432% ( 21) 00:09:06.824 30.022 - 30.255: 97.6627% ( 20) 00:09:06.824 30.255 - 30.487: 98.1126% ( 41) 00:09:06.824 30.487 - 30.720: 98.5735% ( 42) 00:09:06.824 30.720 - 30.953: 98.8917% ( 29) 00:09:06.824 30.953 - 31.185: 99.0234% ( 12) 00:09:06.824 31.185 - 31.418: 99.0892% ( 6) 00:09:06.824 31.418 - 31.651: 99.2099% ( 11) 00:09:06.824 31.651 - 31.884: 99.2758% ( 6) 00:09:06.824 31.884 - 32.116: 99.3306% ( 5) 00:09:06.824 32.116 - 32.349: 99.3526% ( 2) 00:09:06.824 32.349 - 32.582: 99.3635% ( 1) 00:09:06.824 32.582 - 32.815: 99.4074% ( 4) 00:09:06.824 33.513 - 33.745: 99.4184% ( 1) 00:09:06.824 33.745 - 33.978: 99.4404% ( 2) 00:09:06.824 34.909 - 35.142: 99.4623% ( 2) 00:09:06.824 35.375 - 35.607: 99.4733% ( 1) 00:09:06.824 35.607 - 35.840: 99.4843% ( 1) 00:09:06.824 36.073 - 36.305: 99.5062% ( 2) 00:09:06.824 36.305 - 36.538: 99.5281% ( 2) 00:09:06.824 36.538 - 36.771: 99.5720% ( 4) 00:09:06.824 36.771 - 37.004: 99.5830% ( 1) 00:09:06.824 37.004 - 37.236: 99.6050% ( 2) 00:09:06.824 37.236 - 37.469: 99.6269% ( 2) 00:09:06.824 37.469 - 37.702: 99.6379% ( 1) 00:09:06.824 37.702 - 37.935: 99.6598% ( 2) 00:09:06.824 37.935 - 38.167: 99.6927% ( 3) 00:09:06.824 38.400 - 38.633: 99.7257% ( 3) 00:09:06.824 38.865 - 39.098: 99.7366% ( 1) 00:09:06.824 39.098 - 39.331: 99.7805% ( 4) 00:09:06.824 39.331 - 39.564: 99.7915% ( 1) 00:09:06.824 39.564 - 39.796: 99.8025% ( 1) 00:09:06.824 40.029 - 40.262: 99.8135% ( 1) 00:09:06.824 40.262 - 40.495: 99.8244% ( 1) 00:09:06.824 40.727 - 40.960: 99.8354% ( 1) 00:09:06.824 43.520 - 43.753: 99.8464% ( 1) 00:09:06.824 44.451 - 44.684: 99.8573% ( 1) 00:09:06.824 45.149 - 45.382: 99.8793% ( 2) 00:09:06.824 47.476 - 47.709: 99.9012% ( 2) 00:09:06.824 48.175 - 48.407: 99.9122% ( 1) 00:09:06.824 52.596 - 52.829: 99.9232% ( 1) 00:09:06.824 53.062 - 53.295: 99.9342% ( 1) 00:09:06.824 54.458 - 54.691: 99.9451% ( 1) 00:09:06.824 55.156 - 55.389: 99.9561% ( 1) 00:09:06.824 85.644 - 86.109: 99.9671% ( 1) 00:09:06.824 90.298 - 90.764: 99.9781% ( 1) 00:09:06.824 91.229 - 91.695: 99.9890% ( 1) 00:09:06.824 98.676 - 99.142: 100.0000% ( 1) 00:09:06.824 00:09:06.824 Complete histogram 00:09:06.824 ================== 00:09:06.824 Range in us Cumulative Count 00:09:06.824 9.484 - 9.542: 0.0329% ( 3) 00:09:06.824 9.542 - 9.600: 0.0549% ( 2) 00:09:06.824 9.600 - 9.658: 0.0768% ( 2) 00:09:06.824 9.658 - 9.716: 0.1097% ( 3) 00:09:06.824 9.716 - 9.775: 0.1536% ( 4) 00:09:06.824 9.775 - 9.833: 0.2743% ( 11) 00:09:06.824 9.833 - 9.891: 0.5048% ( 21) 00:09:06.824 9.891 - 9.949: 0.8120% ( 28) 00:09:06.824 9.949 - 10.007: 1.2400% ( 39) 00:09:06.824 10.007 - 10.065: 
1.7338% ( 45) 00:09:06.824 10.065 - 10.124: 2.4141% ( 62) 00:09:06.824 10.124 - 10.182: 3.1493% ( 67) 00:09:06.824 10.182 - 10.240: 4.6527% ( 137) 00:09:06.824 10.240 - 10.298: 7.8569% ( 292) 00:09:06.824 10.298 - 10.356: 13.2777% ( 494) 00:09:06.824 10.356 - 10.415: 20.4543% ( 654) 00:09:06.824 10.415 - 10.473: 28.6294% ( 745) 00:09:06.824 10.473 - 10.531: 36.9362% ( 757) 00:09:06.824 10.531 - 10.589: 43.2569% ( 576) 00:09:06.824 10.589 - 10.647: 48.4802% ( 476) 00:09:06.824 10.647 - 10.705: 51.7283% ( 296) 00:09:06.824 10.705 - 10.764: 53.4840% ( 160) 00:09:06.824 10.764 - 10.822: 54.6143% ( 103) 00:09:06.824 10.822 - 10.880: 55.1849% ( 52) 00:09:06.824 10.880 - 10.938: 55.4483% ( 24) 00:09:06.824 10.938 - 10.996: 55.5690% ( 11) 00:09:06.824 10.996 - 11.055: 55.7994% ( 21) 00:09:06.824 11.055 - 11.113: 56.0298% ( 21) 00:09:06.824 11.113 - 11.171: 56.2713% ( 22) 00:09:06.824 11.171 - 11.229: 56.5895% ( 29) 00:09:06.824 11.229 - 11.287: 56.9077% ( 29) 00:09:06.824 11.287 - 11.345: 57.2918% ( 35) 00:09:06.824 11.345 - 11.404: 57.5990% ( 28) 00:09:06.824 11.404 - 11.462: 57.8514% ( 23) 00:09:06.824 11.462 - 11.520: 58.1038% ( 23) 00:09:06.824 11.520 - 11.578: 58.2355% ( 12) 00:09:06.824 11.578 - 11.636: 58.3781% ( 13) 00:09:06.824 11.636 - 11.695: 58.5208% ( 13) 00:09:06.824 11.695 - 11.753: 58.8610% ( 31) 00:09:06.824 11.753 - 11.811: 59.2121% ( 32) 00:09:06.824 11.811 - 11.869: 59.7718% ( 51) 00:09:06.824 11.869 - 11.927: 60.3204% ( 50) 00:09:06.824 11.927 - 11.985: 60.9788% ( 60) 00:09:06.824 11.985 - 12.044: 61.5275% ( 50) 00:09:06.824 12.044 - 12.102: 62.2078% ( 62) 00:09:06.824 12.102 - 12.160: 62.7236% ( 47) 00:09:06.824 12.160 - 12.218: 63.1954% ( 43) 00:09:06.824 12.218 - 12.276: 63.6234% ( 39) 00:09:06.824 12.276 - 12.335: 64.2818% ( 60) 00:09:06.824 12.335 - 12.393: 65.9607% ( 153) 00:09:06.824 12.393 - 12.451: 68.7040% ( 250) 00:09:06.824 12.451 - 12.509: 71.9412% ( 295) 00:09:06.824 12.509 - 12.567: 75.6941% ( 342) 00:09:06.824 12.567 - 12.625: 79.2385% ( 323) 00:09:06.824 12.625 - 12.684: 81.8721% ( 240) 00:09:06.824 12.684 - 12.742: 83.7814% ( 174) 00:09:06.824 12.742 - 12.800: 84.9117% ( 103) 00:09:06.824 12.800 - 12.858: 85.6469% ( 67) 00:09:06.824 12.858 - 12.916: 86.0968% ( 41) 00:09:06.824 12.916 - 12.975: 86.4918% ( 36) 00:09:06.824 12.975 - 13.033: 86.7223% ( 21) 00:09:06.824 13.033 - 13.091: 86.8978% ( 16) 00:09:06.824 13.091 - 13.149: 87.0405% ( 13) 00:09:06.824 13.149 - 13.207: 87.1393% ( 9) 00:09:06.824 13.207 - 13.265: 87.2490% ( 10) 00:09:06.824 13.265 - 13.324: 87.4026% ( 14) 00:09:06.824 13.324 - 13.382: 87.6879% ( 26) 00:09:06.824 13.382 - 13.440: 87.9184% ( 21) 00:09:06.824 13.440 - 13.498: 88.0610% ( 13) 00:09:06.824 13.498 - 13.556: 88.2366% ( 16) 00:09:06.824 13.556 - 13.615: 88.3573% ( 11) 00:09:06.824 13.615 - 13.673: 88.5109% ( 14) 00:09:06.824 13.673 - 13.731: 88.6536% ( 13) 00:09:06.824 13.731 - 13.789: 88.8401% ( 17) 00:09:06.824 13.789 - 13.847: 88.9279% ( 8) 00:09:06.824 13.847 - 13.905: 89.0815% ( 14) 00:09:06.824 13.905 - 13.964: 89.2242% ( 13) 00:09:06.824 13.964 - 14.022: 89.3668% ( 13) 00:09:06.824 14.022 - 14.080: 89.6631% ( 27) 00:09:06.824 14.080 - 14.138: 89.9484% ( 26) 00:09:06.824 14.138 - 14.196: 90.2337% ( 26) 00:09:06.824 14.196 - 14.255: 90.5629% ( 30) 00:09:06.824 14.255 - 14.313: 90.7605% ( 18) 00:09:06.824 14.313 - 14.371: 91.0787% ( 29) 00:09:06.824 14.371 - 14.429: 91.3201% ( 22) 00:09:06.824 14.429 - 14.487: 91.4408% ( 11) 00:09:06.824 14.487 - 14.545: 91.6273% ( 17) 00:09:06.824 14.545 - 14.604: 91.7261% ( 9) 00:09:06.824 
14.604 - 14.662: 91.8578% ( 12) 00:09:06.824 14.662 - 14.720: 91.9785% ( 11) 00:09:06.824 14.720 - 14.778: 92.0882% ( 10) 00:09:06.824 14.778 - 14.836: 92.1321% ( 4) 00:09:06.824 14.836 - 14.895: 92.2199% ( 8) 00:09:06.824 14.895 - 15.011: 92.3516% ( 12) 00:09:06.824 15.011 - 15.127: 92.5381% ( 17) 00:09:06.824 15.127 - 15.244: 92.7795% ( 22) 00:09:06.824 15.244 - 15.360: 92.9332% ( 14) 00:09:06.824 15.360 - 15.476: 93.0210% ( 8) 00:09:06.824 15.476 - 15.593: 93.1526% ( 12) 00:09:06.824 15.593 - 15.709: 93.2075% ( 5) 00:09:06.824 15.709 - 15.825: 93.2404% ( 3) 00:09:06.824 15.825 - 15.942: 93.3502% ( 10) 00:09:06.824 15.942 - 16.058: 93.4379% ( 8) 00:09:06.824 16.058 - 16.175: 93.4818% ( 4) 00:09:06.824 16.175 - 16.291: 93.5148% ( 3) 00:09:06.824 16.291 - 16.407: 93.5367% ( 2) 00:09:06.824 16.407 - 16.524: 93.5916% ( 5) 00:09:06.824 16.524 - 16.640: 93.6464% ( 5) 00:09:06.824 16.640 - 16.756: 93.7233% ( 7) 00:09:06.824 16.756 - 16.873: 93.7781% ( 5) 00:09:06.824 16.873 - 16.989: 93.8001% ( 2) 00:09:06.824 16.989 - 17.105: 93.8220% ( 2) 00:09:06.824 17.105 - 17.222: 93.8330% ( 1) 00:09:06.824 17.222 - 17.338: 93.8879% ( 5) 00:09:06.824 17.338 - 17.455: 93.9427% ( 5) 00:09:06.824 17.455 - 17.571: 93.9647% ( 2) 00:09:06.824 17.571 - 17.687: 93.9976% ( 3) 00:09:06.824 17.687 - 17.804: 94.0195% ( 2) 00:09:06.825 17.804 - 17.920: 94.1183% ( 9) 00:09:06.825 17.920 - 18.036: 94.2609% ( 13) 00:09:06.825 18.036 - 18.153: 94.3817% ( 11) 00:09:06.825 18.153 - 18.269: 94.4804% ( 9) 00:09:06.825 18.269 - 18.385: 94.5353% ( 5) 00:09:06.825 18.385 - 18.502: 94.6011% ( 6) 00:09:06.825 18.502 - 18.618: 94.6560% ( 5) 00:09:06.825 18.618 - 18.735: 94.6889% ( 3) 00:09:06.825 18.735 - 18.851: 94.7877% ( 9) 00:09:06.825 18.851 - 18.967: 94.8425% ( 5) 00:09:06.825 18.967 - 19.084: 94.8755% ( 3) 00:09:06.825 19.084 - 19.200: 94.9084% ( 3) 00:09:06.825 19.200 - 19.316: 94.9632% ( 5) 00:09:06.825 19.316 - 19.433: 94.9962% ( 3) 00:09:06.825 19.433 - 19.549: 95.0401% ( 4) 00:09:06.825 19.549 - 19.665: 95.0620% ( 2) 00:09:06.825 19.665 - 19.782: 95.0730% ( 1) 00:09:06.825 19.782 - 19.898: 95.1388% ( 6) 00:09:06.825 19.898 - 20.015: 95.2047% ( 6) 00:09:06.825 20.015 - 20.131: 95.2376% ( 3) 00:09:06.825 20.131 - 20.247: 95.2815% ( 4) 00:09:06.825 20.247 - 20.364: 95.3254% ( 4) 00:09:06.825 20.364 - 20.480: 95.4131% ( 8) 00:09:06.825 20.480 - 20.596: 95.4570% ( 4) 00:09:06.825 20.596 - 20.713: 95.5119% ( 5) 00:09:06.825 20.713 - 20.829: 95.5339% ( 2) 00:09:06.825 20.829 - 20.945: 95.5668% ( 3) 00:09:06.825 20.945 - 21.062: 95.5887% ( 2) 00:09:06.825 21.062 - 21.178: 95.6216% ( 3) 00:09:06.825 21.295 - 21.411: 95.6326% ( 1) 00:09:06.825 21.411 - 21.527: 95.7094% ( 7) 00:09:06.825 21.644 - 21.760: 95.7423% ( 3) 00:09:06.825 21.760 - 21.876: 95.7972% ( 5) 00:09:06.825 21.876 - 21.993: 95.8192% ( 2) 00:09:06.825 21.993 - 22.109: 95.8740% ( 5) 00:09:06.825 22.109 - 22.225: 95.8960% ( 2) 00:09:06.825 22.225 - 22.342: 95.9179% ( 2) 00:09:06.825 22.342 - 22.458: 95.9399% ( 2) 00:09:06.825 22.575 - 22.691: 95.9947% ( 5) 00:09:06.825 22.691 - 22.807: 96.0167% ( 2) 00:09:06.825 22.807 - 22.924: 96.0496% ( 3) 00:09:06.825 22.924 - 23.040: 96.0606% ( 1) 00:09:06.825 23.040 - 23.156: 96.1154% ( 5) 00:09:06.825 23.156 - 23.273: 96.1484% ( 3) 00:09:06.825 23.273 - 23.389: 96.1593% ( 1) 00:09:06.825 23.389 - 23.505: 96.2252% ( 6) 00:09:06.825 23.505 - 23.622: 96.2581% ( 3) 00:09:06.825 23.622 - 23.738: 96.2910% ( 3) 00:09:06.825 23.738 - 23.855: 96.3349% ( 4) 00:09:06.825 23.855 - 23.971: 96.3569% ( 2) 00:09:06.825 23.971 - 24.087: 
96.3678% ( 1) 00:09:06.825 24.087 - 24.204: 96.3788% ( 1) 00:09:06.825 24.204 - 24.320: 96.3898% ( 1) 00:09:06.825 24.320 - 24.436: 96.4227% ( 3) 00:09:06.825 24.436 - 24.553: 96.4556% ( 3) 00:09:06.825 24.553 - 24.669: 96.4995% ( 4) 00:09:06.825 24.669 - 24.785: 96.5763% ( 7) 00:09:06.825 24.785 - 24.902: 96.6312% ( 5) 00:09:06.825 24.902 - 25.018: 96.7080% ( 7) 00:09:06.825 25.018 - 25.135: 96.7848% ( 7) 00:09:06.825 25.135 - 25.251: 97.0262% ( 22) 00:09:06.825 25.251 - 25.367: 97.2128% ( 17) 00:09:06.825 25.367 - 25.484: 97.4761% ( 24) 00:09:06.825 25.484 - 25.600: 97.8163% ( 31) 00:09:06.825 25.600 - 25.716: 98.0906% ( 25) 00:09:06.825 25.716 - 25.833: 98.3650% ( 25) 00:09:06.825 25.833 - 25.949: 98.7271% ( 33) 00:09:06.825 25.949 - 26.065: 98.8368% ( 10) 00:09:06.825 26.065 - 26.182: 98.9136% ( 7) 00:09:06.825 26.182 - 26.298: 98.9466% ( 3) 00:09:06.825 26.298 - 26.415: 98.9905% ( 4) 00:09:06.825 26.415 - 26.531: 99.0782% ( 8) 00:09:06.825 26.531 - 26.647: 99.1770% ( 9) 00:09:06.825 26.647 - 26.764: 99.2538% ( 7) 00:09:06.825 26.764 - 26.880: 99.3087% ( 5) 00:09:06.825 26.880 - 26.996: 99.3197% ( 1) 00:09:06.825 26.996 - 27.113: 99.3965% ( 7) 00:09:06.825 27.113 - 27.229: 99.4404% ( 4) 00:09:06.825 27.229 - 27.345: 99.4952% ( 5) 00:09:06.825 27.345 - 27.462: 99.5062% ( 1) 00:09:06.825 27.578 - 27.695: 99.5172% ( 1) 00:09:06.825 27.811 - 27.927: 99.5391% ( 2) 00:09:06.825 28.276 - 28.393: 99.5501% ( 1) 00:09:06.825 30.022 - 30.255: 99.5611% ( 1) 00:09:06.825 30.255 - 30.487: 99.5720% ( 1) 00:09:06.825 31.418 - 31.651: 99.5830% ( 1) 00:09:06.825 31.651 - 31.884: 99.5940% ( 1) 00:09:06.825 32.116 - 32.349: 99.6050% ( 1) 00:09:06.825 32.349 - 32.582: 99.6159% ( 1) 00:09:06.825 32.582 - 32.815: 99.6269% ( 1) 00:09:06.825 32.815 - 33.047: 99.6489% ( 2) 00:09:06.825 33.280 - 33.513: 99.6927% ( 4) 00:09:06.825 33.513 - 33.745: 99.7147% ( 2) 00:09:06.825 33.745 - 33.978: 99.7366% ( 2) 00:09:06.825 33.978 - 34.211: 99.7696% ( 3) 00:09:06.825 34.211 - 34.444: 99.7915% ( 2) 00:09:06.825 34.676 - 34.909: 99.8135% ( 2) 00:09:06.825 34.909 - 35.142: 99.8244% ( 1) 00:09:06.825 35.375 - 35.607: 99.8354% ( 1) 00:09:06.825 37.004 - 37.236: 99.8464% ( 1) 00:09:06.825 37.236 - 37.469: 99.8683% ( 2) 00:09:06.825 37.702 - 37.935: 99.8793% ( 1) 00:09:06.825 38.167 - 38.400: 99.8903% ( 1) 00:09:06.825 39.564 - 39.796: 99.9012% ( 1) 00:09:06.825 40.262 - 40.495: 99.9122% ( 1) 00:09:06.825 40.727 - 40.960: 99.9232% ( 1) 00:09:06.825 41.658 - 41.891: 99.9342% ( 1) 00:09:06.825 43.985 - 44.218: 99.9451% ( 1) 00:09:06.825 44.684 - 44.916: 99.9561% ( 1) 00:09:06.825 49.105 - 49.338: 99.9671% ( 1) 00:09:06.825 51.200 - 51.433: 99.9890% ( 2) 00:09:06.825 76.800 - 77.265: 100.0000% ( 1) 00:09:06.825 00:09:06.825 ************************************ 00:09:06.825 END TEST nvme_overhead 00:09:06.825 ************************************ 00:09:06.825 00:09:06.825 real 0m1.278s 00:09:06.825 user 0m1.092s 00:09:06.825 sys 0m0.135s 00:09:06.825 05:55:58 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.825 05:55:58 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:09:06.825 05:55:58 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:06.825 05:55:58 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:06.825 05:55:58 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:06.825 05:55:58 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.825 05:55:58 nvme -- 
common/autotest_common.sh@10 -- # set +x 00:09:06.825 ************************************ 00:09:06.825 START TEST nvme_arbitration 00:09:06.825 ************************************ 00:09:06.825 05:55:58 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:10.109 Initializing NVMe Controllers 00:09:10.109 Attached to 0000:00:10.0 00:09:10.109 Attached to 0000:00:11.0 00:09:10.109 Attached to 0000:00:13.0 00:09:10.109 Attached to 0000:00:12.0 00:09:10.109 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:10.109 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:10.109 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:10.109 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:10.109 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:10.109 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:10.109 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:10.109 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:10.109 Initialization complete. Launching workers. 00:09:10.109 Starting thread on core 1 with urgent priority queue 00:09:10.109 Starting thread on core 2 with urgent priority queue 00:09:10.109 Starting thread on core 3 with urgent priority queue 00:09:10.109 Starting thread on core 0 with urgent priority queue 00:09:10.109 QEMU NVMe Ctrl (12340 ) core 0: 5226.67 IO/s 19.13 secs/100000 ios 00:09:10.109 QEMU NVMe Ctrl (12342 ) core 0: 5226.67 IO/s 19.13 secs/100000 ios 00:09:10.109 QEMU NVMe Ctrl (12341 ) core 1: 5290.67 IO/s 18.90 secs/100000 ios 00:09:10.109 QEMU NVMe Ctrl (12342 ) core 1: 5290.67 IO/s 18.90 secs/100000 ios 00:09:10.109 QEMU NVMe Ctrl (12343 ) core 2: 5269.33 IO/s 18.98 secs/100000 ios 00:09:10.109 QEMU NVMe Ctrl (12342 ) core 3: 5034.67 IO/s 19.86 secs/100000 ios 00:09:10.109 ======================================================== 00:09:10.109 00:09:10.109 00:09:10.109 real 0m3.285s 00:09:10.109 user 0m9.040s 00:09:10.109 sys 0m0.136s 00:09:10.109 05:56:01 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.109 05:56:01 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:10.109 ************************************ 00:09:10.109 END TEST nvme_arbitration 00:09:10.109 ************************************ 00:09:10.109 05:56:01 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:10.109 05:56:01 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:10.109 05:56:01 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:10.109 05:56:01 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.109 05:56:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.109 ************************************ 00:09:10.109 START TEST nvme_single_aen 00:09:10.109 ************************************ 00:09:10.109 05:56:01 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:10.367 Asynchronous Event Request test 00:09:10.367 Attached to 0000:00:10.0 00:09:10.367 Attached to 0000:00:11.0 00:09:10.367 Attached to 0000:00:13.0 00:09:10.367 Attached to 0000:00:12.0 00:09:10.367 Reset controller to setup AER completions for this process 00:09:10.367 Registering asynchronous event callbacks... 
00:09:10.367 Getting orig temperature thresholds of all controllers 00:09:10.367 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.367 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.367 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.367 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:10.367 Setting all controllers temperature threshold low to trigger AER 00:09:10.367 Waiting for all controllers temperature threshold to be set lower 00:09:10.367 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.367 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:10.367 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.367 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:10.367 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.367 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:10.367 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:10.367 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:10.367 Waiting for all controllers to trigger AER and reset threshold 00:09:10.367 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.367 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.367 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.367 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.367 Cleaning up... 00:09:10.367 00:09:10.367 real 0m0.251s 00:09:10.367 user 0m0.095s 00:09:10.367 sys 0m0.110s 00:09:10.367 ************************************ 00:09:10.367 END TEST nvme_single_aen 00:09:10.367 ************************************ 00:09:10.367 05:56:01 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.367 05:56:01 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:10.367 05:56:02 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:10.367 05:56:02 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:10.367 05:56:02 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:10.367 05:56:02 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.367 05:56:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.367 ************************************ 00:09:10.367 START TEST nvme_doorbell_aers 00:09:10.367 ************************************ 00:09:10.367 05:56:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:09:10.367 05:56:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:10.367 05:56:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:10.367 05:56:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:10.367 05:56:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:10.367 05:56:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:09:10.367 05:56:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:09:10.367 05:56:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:10.367 05:56:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:10.367 05:56:02 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:09:10.625 05:56:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:09:10.625 05:56:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:10.625 05:56:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:10.625 05:56:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:10.625 [2024-07-13 05:56:02.306976] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80594) is not found. Dropping the request. 00:09:20.596 Executing: test_write_invalid_db 00:09:20.596 Waiting for AER completion... 00:09:20.596 Failure: test_write_invalid_db 00:09:20.596 00:09:20.596 Executing: test_invalid_db_write_overflow_sq 00:09:20.596 Waiting for AER completion... 00:09:20.596 Failure: test_invalid_db_write_overflow_sq 00:09:20.596 00:09:20.596 Executing: test_invalid_db_write_overflow_cq 00:09:20.596 Waiting for AER completion... 00:09:20.596 Failure: test_invalid_db_write_overflow_cq 00:09:20.596 00:09:20.596 05:56:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:20.596 05:56:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:20.855 [2024-07-13 05:56:12.383058] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80594) is not found. Dropping the request. 00:09:30.825 Executing: test_write_invalid_db 00:09:30.825 Waiting for AER completion... 00:09:30.825 Failure: test_write_invalid_db 00:09:30.825 00:09:30.825 Executing: test_invalid_db_write_overflow_sq 00:09:30.825 Waiting for AER completion... 00:09:30.825 Failure: test_invalid_db_write_overflow_sq 00:09:30.825 00:09:30.825 Executing: test_invalid_db_write_overflow_cq 00:09:30.825 Waiting for AER completion... 00:09:30.825 Failure: test_invalid_db_write_overflow_cq 00:09:30.825 00:09:30.825 05:56:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:30.825 05:56:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:30.825 [2024-07-13 05:56:22.420059] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80594) is not found. Dropping the request. 00:09:40.798 Executing: test_write_invalid_db 00:09:40.798 Waiting for AER completion... 00:09:40.798 Failure: test_write_invalid_db 00:09:40.798 00:09:40.798 Executing: test_invalid_db_write_overflow_sq 00:09:40.798 Waiting for AER completion... 00:09:40.798 Failure: test_invalid_db_write_overflow_sq 00:09:40.798 00:09:40.798 Executing: test_invalid_db_write_overflow_cq 00:09:40.798 Waiting for AER completion... 
00:09:40.798 Failure: test_invalid_db_write_overflow_cq 00:09:40.798 00:09:40.798 05:56:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:40.798 05:56:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:40.798 [2024-07-13 05:56:32.449993] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80594) is not found. Dropping the request. 00:09:50.769 Executing: test_write_invalid_db 00:09:50.769 Waiting for AER completion... 00:09:50.769 Failure: test_write_invalid_db 00:09:50.769 00:09:50.769 Executing: test_invalid_db_write_overflow_sq 00:09:50.769 Waiting for AER completion... 00:09:50.769 Failure: test_invalid_db_write_overflow_sq 00:09:50.769 00:09:50.769 Executing: test_invalid_db_write_overflow_cq 00:09:50.769 Waiting for AER completion... 00:09:50.769 Failure: test_invalid_db_write_overflow_cq 00:09:50.769 00:09:50.769 00:09:50.769 real 0m40.232s 00:09:50.769 user 0m34.180s 00:09:50.769 sys 0m5.682s 00:09:50.769 05:56:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.769 05:56:42 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:50.769 ************************************ 00:09:50.769 END TEST nvme_doorbell_aers 00:09:50.769 ************************************ 00:09:50.769 05:56:42 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:50.769 05:56:42 nvme -- nvme/nvme.sh@97 -- # uname 00:09:50.769 05:56:42 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:50.769 05:56:42 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:50.769 05:56:42 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:50.769 05:56:42 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.769 05:56:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:50.769 ************************************ 00:09:50.769 START TEST nvme_multi_aen 00:09:50.769 ************************************ 00:09:50.769 05:56:42 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:51.027 [2024-07-13 05:56:42.552877] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80594) is not found. Dropping the request. 00:09:51.027 [2024-07-13 05:56:42.552997] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80594) is not found. Dropping the request. 00:09:51.027 [2024-07-13 05:56:42.553027] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80594) is not found. Dropping the request. 00:09:51.027 [2024-07-13 05:56:42.554473] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80594) is not found. Dropping the request. 00:09:51.027 [2024-07-13 05:56:42.554520] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80594) is not found. Dropping the request. 00:09:51.027 [2024-07-13 05:56:42.554539] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80594) is not found. Dropping the request. 
00:09:51.027 [2024-07-13 05:56:42.555855] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80594) is not found. Dropping the request. 00:09:51.027 [2024-07-13 05:56:42.555900] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80594) is not found. Dropping the request. 00:09:51.027 [2024-07-13 05:56:42.555918] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80594) is not found. Dropping the request. 00:09:51.027 [2024-07-13 05:56:42.557305] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80594) is not found. Dropping the request. 00:09:51.027 [2024-07-13 05:56:42.557512] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80594) is not found. Dropping the request. 00:09:51.027 [2024-07-13 05:56:42.557666] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 80594) is not found. Dropping the request. 00:09:51.027 Child process pid: 81105 00:09:51.288 [Child] Asynchronous Event Request test 00:09:51.288 [Child] Attached to 0000:00:10.0 00:09:51.288 [Child] Attached to 0000:00:11.0 00:09:51.288 [Child] Attached to 0000:00:13.0 00:09:51.288 [Child] Attached to 0000:00:12.0 00:09:51.288 [Child] Registering asynchronous event callbacks... 00:09:51.288 [Child] Getting orig temperature thresholds of all controllers 00:09:51.288 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:51.288 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:51.288 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:51.288 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:51.288 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:51.288 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:51.288 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:51.288 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:51.288 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:51.288 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:51.288 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:51.288 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:51.288 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:51.288 [Child] Cleaning up... 00:09:51.288 Asynchronous Event Request test 00:09:51.288 Attached to 0000:00:10.0 00:09:51.288 Attached to 0000:00:11.0 00:09:51.288 Attached to 0000:00:13.0 00:09:51.288 Attached to 0000:00:12.0 00:09:51.288 Reset controller to setup AER completions for this process 00:09:51.288 Registering asynchronous event callbacks... 
00:09:51.288 Getting orig temperature thresholds of all controllers 00:09:51.288 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:51.288 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:51.288 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:51.288 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:51.288 Setting all controllers temperature threshold low to trigger AER 00:09:51.288 Waiting for all controllers temperature threshold to be set lower 00:09:51.288 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:51.288 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:51.288 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:51.288 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:51.288 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:51.288 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:51.288 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:51.288 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:51.288 Waiting for all controllers to trigger AER and reset threshold 00:09:51.288 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:51.288 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:51.288 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:51.288 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:51.288 Cleaning up... 00:09:51.288 00:09:51.288 real 0m0.527s 00:09:51.288 user 0m0.189s 00:09:51.288 sys 0m0.197s 00:09:51.288 05:56:42 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:51.288 05:56:42 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:51.288 ************************************ 00:09:51.288 END TEST nvme_multi_aen 00:09:51.288 ************************************ 00:09:51.288 05:56:42 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:51.288 05:56:42 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:51.288 05:56:42 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:51.288 05:56:42 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.288 05:56:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.288 ************************************ 00:09:51.288 START TEST nvme_startup 00:09:51.288 ************************************ 00:09:51.288 05:56:42 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:51.546 Initializing NVMe Controllers 00:09:51.546 Attached to 0000:00:10.0 00:09:51.546 Attached to 0000:00:11.0 00:09:51.546 Attached to 0000:00:13.0 00:09:51.546 Attached to 0000:00:12.0 00:09:51.546 Initialization complete. 00:09:51.546 Time used:168730.391 (us). 
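Both AER tests above (nvme_single_aen and the nvme_multi_aen parent/child pair) work the same way: lower each controller's temperature threshold below its current reading, which obliges the controller to raise an asynchronous event, then restore the threshold in the aer_cb callback. NVMe reports these temperature fields in Kelvin with a fixed 273 offset, which is why 343 and 323 Kelvin print as 70 and 50 Celsius:

    k2c() { echo $(( $1 - 273 )); }   # NVMe temperature fields are Kelvin
    k2c 343   # -> 70, the original threshold
    k2c 323   # -> 50, the current reading; the test drops the threshold below this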
00:09:51.546 ************************************ 00:09:51.546 END TEST nvme_startup 00:09:51.546 ************************************ 00:09:51.546 00:09:51.546 real 0m0.241s 00:09:51.546 user 0m0.090s 00:09:51.546 sys 0m0.104s 00:09:51.546 05:56:43 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:51.546 05:56:43 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:51.546 05:56:43 nvme -- common/autotest_common.sh@1142 -- # return 0 00:09:51.546 05:56:43 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:51.546 05:56:43 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:51.546 05:56:43 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.546 05:56:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.546 ************************************ 00:09:51.546 START TEST nvme_multi_secondary 00:09:51.546 ************************************ 00:09:51.546 05:56:43 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:09:51.546 05:56:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=81161 00:09:51.546 05:56:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:51.546 05:56:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=81162 00:09:51.546 05:56:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:51.546 05:56:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:54.825 Initializing NVMe Controllers 00:09:54.825 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:54.825 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:54.825 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:54.825 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:54.825 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:54.825 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:54.825 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:54.825 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:54.825 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:54.825 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:54.825 Initialization complete. Launching workers. 
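nvme_multi_secondary runs three spdk_nvme_perf instances against the same four controllers at once, sharing the PCIe devices through the common shared memory group ID (-i 0) and separated only by their core masks. The -c masks in the invocations above are single-core bitmaps, where bit n selects lcore n; decoded:

    # decode the -c core masks used above (bit n selects lcore n)
    printf '0x%x -> lcore %d\n' 0x1 0 0x2 1 0x4 2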
00:09:54.825 ======================================================== 00:09:54.825 Latency(us) 00:09:54.825 Device Information : IOPS MiB/s Average min max 00:09:54.825 PCIE (0000:00:10.0) NSID 1 from core 1: 5776.20 22.56 2768.17 1092.32 6290.07 00:09:54.825 PCIE (0000:00:11.0) NSID 1 from core 1: 5776.20 22.56 2769.51 1139.76 6835.56 00:09:54.825 PCIE (0000:00:13.0) NSID 1 from core 1: 5776.20 22.56 2769.51 1145.80 6735.07 00:09:54.825 PCIE (0000:00:12.0) NSID 1 from core 1: 5776.20 22.56 2769.62 1155.39 6165.38 00:09:54.825 PCIE (0000:00:12.0) NSID 2 from core 1: 5776.20 22.56 2769.53 1140.65 5496.29 00:09:54.825 PCIE (0000:00:12.0) NSID 3 from core 1: 5776.20 22.56 2769.66 1144.90 5734.80 00:09:54.825 ======================================================== 00:09:54.825 Total : 34657.20 135.38 2769.33 1092.32 6835.56 00:09:54.825 00:09:54.825 Initializing NVMe Controllers 00:09:54.825 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:54.825 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:54.825 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:54.825 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:54.825 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:54.825 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:54.825 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:54.825 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:54.825 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:54.825 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:54.825 Initialization complete. Launching workers. 00:09:54.825 ======================================================== 00:09:54.825 Latency(us) 00:09:54.825 Device Information : IOPS MiB/s Average min max 00:09:54.825 PCIE (0000:00:10.0) NSID 1 from core 2: 2460.18 9.61 6501.08 1413.25 12646.32 00:09:54.825 PCIE (0000:00:11.0) NSID 1 from core 2: 2460.18 9.61 6498.78 1455.58 13465.42 00:09:54.825 PCIE (0000:00:13.0) NSID 1 from core 2: 2460.18 9.61 6494.42 1574.38 14559.64 00:09:54.825 PCIE (0000:00:12.0) NSID 1 from core 2: 2460.18 9.61 6493.66 1530.13 13763.26 00:09:54.825 PCIE (0000:00:12.0) NSID 2 from core 2: 2460.18 9.61 6493.80 1260.86 13164.65 00:09:54.825 PCIE (0000:00:12.0) NSID 3 from core 2: 2460.18 9.61 6493.73 1113.84 12640.76 00:09:54.825 ======================================================== 00:09:54.825 Total : 14761.07 57.66 6495.91 1113.84 14559.64 00:09:54.825 00:09:55.083 05:56:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 81161 00:09:56.980 Initializing NVMe Controllers 00:09:56.981 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:56.981 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:56.981 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:56.981 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:56.981 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:56.981 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:56.981 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:56.981 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:56.981 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:56.981 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:56.981 Initialization complete. Launching workers. 
00:09:56.981 ======================================================== 00:09:56.981 Latency(us) 00:09:56.981 Device Information : IOPS MiB/s Average min max 00:09:56.981 PCIE (0000:00:10.0) NSID 1 from core 0: 8997.61 35.15 1776.77 928.29 5643.01 00:09:56.981 PCIE (0000:00:11.0) NSID 1 from core 0: 8997.61 35.15 1777.68 941.38 5194.03 00:09:56.981 PCIE (0000:00:13.0) NSID 1 from core 0: 8997.61 35.15 1777.59 745.34 5381.09 00:09:56.981 PCIE (0000:00:12.0) NSID 1 from core 0: 8997.61 35.15 1777.48 652.19 5452.54 00:09:56.981 PCIE (0000:00:12.0) NSID 2 from core 0: 8997.61 35.15 1777.37 539.77 5538.73 00:09:56.981 PCIE (0000:00:12.0) NSID 3 from core 0: 8997.61 35.15 1777.27 422.53 5418.53 00:09:56.981 ======================================================== 00:09:56.981 Total : 53985.66 210.88 1777.36 422.53 5643.01 00:09:56.981 00:09:56.981 05:56:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 81162 00:09:56.981 05:56:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=81237 00:09:56.981 05:56:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:56.981 05:56:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=81238 00:09:56.981 05:56:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:56.981 05:56:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:00.265 Initializing NVMe Controllers 00:10:00.265 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:00.265 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:00.265 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:00.265 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:00.265 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:00.265 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:00.265 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:00.265 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:00.265 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:00.265 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:00.265 Initialization complete. Launching workers. 
00:10:00.265 ======================================================== 00:10:00.265 Latency(us) 00:10:00.265 Device Information : IOPS MiB/s Average min max 00:10:00.265 PCIE (0000:00:10.0) NSID 1 from core 1: 5921.69 23.13 2700.07 946.16 7681.67 00:10:00.265 PCIE (0000:00:11.0) NSID 1 from core 1: 5921.69 23.13 2701.26 985.19 7190.31 00:10:00.265 PCIE (0000:00:13.0) NSID 1 from core 1: 5921.69 23.13 2701.39 988.88 7038.38 00:10:00.265 PCIE (0000:00:12.0) NSID 1 from core 1: 5921.69 23.13 2701.35 972.35 6301.70 00:10:00.265 PCIE (0000:00:12.0) NSID 2 from core 1: 5921.69 23.13 2701.32 979.45 6840.98 00:10:00.265 PCIE (0000:00:12.0) NSID 3 from core 1: 5921.69 23.13 2701.30 983.41 6944.26 00:10:00.265 ======================================================== 00:10:00.265 Total : 35530.14 138.79 2701.12 946.16 7681.67 00:10:00.265 00:10:00.524 Initializing NVMe Controllers 00:10:00.524 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:00.524 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:00.524 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:00.524 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:00.524 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:00.524 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:00.524 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:00.524 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:00.524 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:00.524 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:00.524 Initialization complete. Launching workers. 00:10:00.524 ======================================================== 00:10:00.524 Latency(us) 00:10:00.524 Device Information : IOPS MiB/s Average min max 00:10:00.524 PCIE (0000:00:10.0) NSID 1 from core 0: 5740.03 22.42 2785.48 984.56 5948.47 00:10:00.524 PCIE (0000:00:11.0) NSID 1 from core 0: 5740.03 22.42 2786.74 1008.83 5983.24 00:10:00.524 PCIE (0000:00:13.0) NSID 1 from core 0: 5740.03 22.42 2786.61 1007.84 5808.09 00:10:00.524 PCIE (0000:00:12.0) NSID 1 from core 0: 5740.03 22.42 2786.37 1000.33 6244.23 00:10:00.524 PCIE (0000:00:12.0) NSID 2 from core 0: 5740.03 22.42 2786.06 1015.38 7092.60 00:10:00.524 PCIE (0000:00:12.0) NSID 3 from core 0: 5740.03 22.42 2785.79 978.59 6898.77 00:10:00.524 ======================================================== 00:10:00.524 Total : 34440.17 134.53 2786.17 978.59 7092.60 00:10:00.524 00:10:02.425 Initializing NVMe Controllers 00:10:02.425 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:02.425 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:02.425 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:02.425 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:02.425 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:02.425 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:02.425 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:02.425 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:02.425 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:02.425 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:02.425 Initialization complete. Launching workers. 
00:10:02.425 ======================================================== 00:10:02.425 Latency(us) 00:10:02.425 Device Information : IOPS MiB/s Average min max 00:10:02.425 PCIE (0000:00:10.0) NSID 1 from core 2: 3759.78 14.69 4253.57 970.47 12706.20 00:10:02.425 PCIE (0000:00:11.0) NSID 1 from core 2: 3759.78 14.69 4254.98 967.34 16714.41 00:10:02.425 PCIE (0000:00:13.0) NSID 1 from core 2: 3759.78 14.69 4254.69 968.25 16865.03 00:10:02.425 PCIE (0000:00:12.0) NSID 1 from core 2: 3759.78 14.69 4254.50 863.77 16811.84 00:10:02.425 PCIE (0000:00:12.0) NSID 2 from core 2: 3759.78 14.69 4253.96 682.77 12979.15 00:10:02.425 PCIE (0000:00:12.0) NSID 3 from core 2: 3759.78 14.69 4254.42 584.34 12493.02 00:10:02.425 ======================================================== 00:10:02.425 Total : 22558.66 88.12 4254.36 584.34 16865.03 00:10:02.425 00:10:02.425 05:56:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 81237 00:10:02.425 05:56:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 81238 00:10:02.425 00:10:02.425 real 0m10.911s 00:10:02.425 user 0m18.399s 00:10:02.425 sys 0m0.790s 00:10:02.425 05:56:54 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:02.425 ************************************ 00:10:02.425 END TEST nvme_multi_secondary 00:10:02.425 05:56:54 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:02.425 ************************************ 00:10:02.425 05:56:54 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:02.425 05:56:54 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:02.425 05:56:54 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:02.425 05:56:54 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/80192 ]] 00:10:02.425 05:56:54 nvme -- common/autotest_common.sh@1088 -- # kill 80192 00:10:02.425 05:56:54 nvme -- common/autotest_common.sh@1089 -- # wait 80192 00:10:02.425 [2024-07-13 05:56:54.138744] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:02.425 [2024-07-13 05:56:54.138817] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:02.425 [2024-07-13 05:56:54.138842] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:02.425 [2024-07-13 05:56:54.138865] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:02.425 [2024-07-13 05:56:54.139470] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:02.425 [2024-07-13 05:56:54.139523] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:02.425 [2024-07-13 05:56:54.139544] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:02.425 [2024-07-13 05:56:54.139564] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 
00:10:02.425 [2024-07-13 05:56:54.140111] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:02.425 [2024-07-13 05:56:54.140176] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:02.425 [2024-07-13 05:56:54.140198] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:02.425 [2024-07-13 05:56:54.140221] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:02.425 [2024-07-13 05:56:54.140836] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:02.425 [2024-07-13 05:56:54.140889] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:02.425 [2024-07-13 05:56:54.140911] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:02.425 [2024-07-13 05:56:54.140931] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81104) is not found. Dropping the request. 00:10:02.684 05:56:54 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:10:02.684 05:56:54 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:10:02.684 05:56:54 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:02.684 05:56:54 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:02.684 05:56:54 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.684 05:56:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:02.684 ************************************ 00:10:02.684 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:02.684 ************************************ 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:02.684 * Looking for test storage... 
00:10:02.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=81385 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:02.684 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:02.942 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 81385 00:10:02.942 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 81385 ']' 00:10:02.942 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.942 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.942 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.942 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.942 05:56:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:02.942 [2024-07-13 05:56:54.536213] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:10:02.942 [2024-07-13 05:56:54.536378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81385 ] 00:10:03.201 [2024-07-13 05:56:54.704957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.201 [2024-07-13 05:56:54.748105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.201 [2024-07-13 05:56:54.748272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.201 [2024-07-13 05:56:54.748314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.201 [2024-07-13 05:56:54.748379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.768 05:56:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.768 05:56:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:10:03.768 05:56:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:03.768 05:56:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.768 05:56:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:03.768 nvme0n1 00:10:03.768 05:56:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.027 05:56:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:04.027 05:56:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_lE9yP.txt 00:10:04.027 05:56:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:04.027 05:56:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:04.027 05:56:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:04.027 true 00:10:04.027 05:56:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.027 05:56:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:04.027 05:56:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720850215 00:10:04.027 05:56:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=81408 00:10:04.027 05:56:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:04.027 05:56:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 
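At this point the script has already picked the first controller that gen_nvme.sh reports (0000:00:10.0), attached it as controller nvme0, and armed a one-shot error injection; the sleep gives that window time to matter. Restated in one place, with every flag and value copied verbatim from the xtrace above rather than being a fresh invocation:

  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  scripts/rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

Opcode 10 (0x0a) is the admin Get Features command, and --do_not_submit parks the next matching command inside the driver for up to 15 s (err_injection_timeout), which is what gives the controller reset exercised below a stuck admin command to complete manually.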
00:10:04.027 05:56:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:05.932 [2024-07-13 05:56:57.523647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:05.932 [2024-07-13 05:56:57.524108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:05.932 [2024-07-13 05:56:57.524179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:05.932 [2024-07-13 05:56:57.524203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:05.932 [2024-07-13 05:56:57.526324] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.932 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 81408 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 81408 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 81408 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_lE9yP.txt 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:05.932 05:56:57 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_lE9yP.txt 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 81385 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 81385 ']' 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 81385 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:05.932 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81385 00:10:06.191 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:06.191 killing process with pid 81385 00:10:06.191 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:06.191 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81385' 00:10:06.191 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 81385 00:10:06.191 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 81385 00:10:06.451 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:06.451 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:06.451 
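To verify the injection round-tripped, the test reads the saved completion (.cpl) back out of the tmp file and has base64_decode_bits extract the Status Code and Status Code Type from the 16-byte completion queue entry, comparing them against the injected --sc 1 / --sct 0. A minimal sketch of that decode, assuming the helper works roughly the way its xtrace suggests (bytes 14-15 carry CQE dword 3's status field: bit 0 phase tag, bits 1-8 SC, bits 9-11 SCT):

  decode_bits() { # $1 = base64-encoded 16-byte completion, $2 = start bit, $3 = mask
      local bytes=($(base64 -d <<< "$1" | hexdump -ve '/1 "0x%02x\n"'))
      local status=$(( bytes[14] | (bytes[15] << 8) ))   # status halfword, 0x0002 here
      printf '0x%x\n' $(( (status >> $2) & $3 ))
  }
  decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # SC  -> 0x1, matches --sc 1
  decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # SCT -> 0x0, matches --sct 0

Status byte 0x02 is binary 10: phase 0, SC 1, SCT 0, so the (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) check passes, and diff_time=2 stays comfortably under the 5 s test_timeout.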
************************************ 00:10:06.451 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:06.451 ************************************ 00:10:06.451 00:10:06.451 real 0m3.724s 00:10:06.451 user 0m13.305s 00:10:06.451 sys 0m0.543s 00:10:06.451 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:06.451 05:56:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:06.451 05:56:58 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:06.451 05:56:58 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:06.451 05:56:58 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:06.451 05:56:58 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:06.451 05:56:58 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.451 05:56:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:06.451 ************************************ 00:10:06.451 START TEST nvme_fio 00:10:06.451 ************************************ 00:10:06.451 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:10:06.451 05:56:58 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:06.451 05:56:58 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:06.451 05:56:58 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:06.451 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:06.451 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:10:06.451 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:06.451 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:06.451 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:06.451 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:06.451 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:06.451 05:56:58 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:06.451 05:56:58 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:06.451 05:56:58 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:06.451 05:56:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:06.451 05:56:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:06.709 05:56:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:06.709 05:56:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:06.969 05:56:58 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:06.969 05:56:58 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:06.969 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:06.969 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
00:10:06.969 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:06.969 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:06.969 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:06.969 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:06.969 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:06.969 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:06.969 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:06.969 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:06.969 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:06.969 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:06.969 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:06.969 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:06.969 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:06.969 05:56:58 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:07.227 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:07.227 fio-3.35 00:10:07.227 Starting 1 thread 00:10:10.515 00:10:10.515 test: (groupid=0, jobs=1): err= 0: pid=81542: Sat Jul 13 05:57:01 2024 00:10:10.515 read: IOPS=15.6k, BW=60.8MiB/s (63.8MB/s)(122MiB/2001msec) 00:10:10.515 slat (nsec): min=4309, max=78663, avg=6331.51, stdev=2339.28 00:10:10.515 clat (usec): min=316, max=9871, avg=4094.31, stdev=540.24 00:10:10.515 lat (usec): min=321, max=9949, avg=4100.64, stdev=540.98 00:10:10.515 clat percentiles (usec): 00:10:10.515 | 1.00th=[ 3294], 5.00th=[ 3490], 10.00th=[ 3556], 20.00th=[ 3687], 00:10:10.515 | 30.00th=[ 3752], 40.00th=[ 3884], 50.00th=[ 4015], 60.00th=[ 4228], 00:10:10.515 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4817], 00:10:10.515 | 99.00th=[ 5407], 99.50th=[ 7242], 99.90th=[ 9110], 99.95th=[ 9241], 00:10:10.515 | 99.99th=[ 9765] 00:10:10.515 bw ( KiB/s): min=60032, max=62296, per=98.82%, avg=61525.33, stdev=1293.49, samples=3 00:10:10.515 iops : min=15008, max=15574, avg=15381.33, stdev=323.37, samples=3 00:10:10.515 write: IOPS=15.6k, BW=60.8MiB/s (63.8MB/s)(122MiB/2001msec); 0 zone resets 00:10:10.515 slat (nsec): min=4401, max=48899, avg=6464.18, stdev=2355.14 00:10:10.515 clat (usec): min=298, max=9738, avg=4099.65, stdev=539.78 00:10:10.515 lat (usec): min=303, max=9756, avg=4106.11, stdev=540.52 00:10:10.515 clat percentiles (usec): 00:10:10.515 | 1.00th=[ 3294], 5.00th=[ 3490], 10.00th=[ 3589], 20.00th=[ 3687], 00:10:10.515 | 30.00th=[ 3785], 40.00th=[ 3884], 50.00th=[ 4015], 60.00th=[ 4228], 00:10:10.515 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4817], 00:10:10.515 | 99.00th=[ 5407], 99.50th=[ 7046], 99.90th=[ 8979], 99.95th=[ 9241], 00:10:10.515 | 99.99th=[ 9503] 00:10:10.515 bw ( KiB/s): min=58928, max=62736, per=98.02%, avg=61077.33, stdev=1950.84, samples=3 00:10:10.515 iops 
: min=14732, max=15684, avg=15269.33, stdev=487.71, samples=3 00:10:10.515 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.02% 00:10:10.515 lat (msec) : 2=0.05%, 4=48.78%, 10=51.12% 00:10:10.515 cpu : usr=98.90%, sys=0.20%, ctx=5, majf=0, minf=625 00:10:10.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:10.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:10.515 issued rwts: total=31145,31171,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:10.515 00:10:10.515 Run status group 0 (all jobs): 00:10:10.515 READ: bw=60.8MiB/s (63.8MB/s), 60.8MiB/s-60.8MiB/s (63.8MB/s-63.8MB/s), io=122MiB (128MB), run=2001-2001msec 00:10:10.515 WRITE: bw=60.8MiB/s (63.8MB/s), 60.8MiB/s-60.8MiB/s (63.8MB/s-63.8MB/s), io=122MiB (128MB), run=2001-2001msec 00:10:10.515 ----------------------------------------------------- 00:10:10.515 Suppressions used: 00:10:10.515 count bytes template 00:10:10.515 1 32 /usr/src/fio/parse.c 00:10:10.515 1 8 libtcmalloc_minimal.so 00:10:10.515 ----------------------------------------------------- 00:10:10.515 00:10:10.515 05:57:01 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:10.515 05:57:01 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:10.515 05:57:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:10.515 05:57:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:10.515 05:57:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:10.515 05:57:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:10.773 05:57:02 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:10.773 05:57:02 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:10.773 05:57:02 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:10.773 05:57:02 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:10.773 05:57:02 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:10.773 05:57:02 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:10.773 05:57:02 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:10.773 05:57:02 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:10.773 05:57:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:10.773 05:57:02 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:10.773 05:57:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:10.773 05:57:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:10.773 05:57:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:10.773 05:57:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 
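The ldd | grep libasan | awk step that just ran is how the harness copes with an ASan-instrumented fio plugin: fio loads the spdk_nvme ioengine as a shared object at runtime, and the sanitizer runtime has to be in the process before the instrumented plugin, so both are preloaded together. The pattern, repeated before each of these fio runs (paths taken from this log; adjust for your own build):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096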
00:10:10.773 05:57:02 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:10.773 05:57:02 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:10.773 05:57:02 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:10.773 05:57:02 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:10.773 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:10.773 fio-3.35 00:10:10.773 Starting 1 thread 00:10:14.053 00:10:14.053 test: (groupid=0, jobs=1): err= 0: pid=81594: Sat Jul 13 05:57:05 2024 00:10:14.053 read: IOPS=15.6k, BW=61.0MiB/s (64.0MB/s)(122MiB/2001msec) 00:10:14.053 slat (nsec): min=4456, max=49449, avg=6147.86, stdev=2413.47 00:10:14.053 clat (usec): min=261, max=9647, avg=4076.90, stdev=530.65 00:10:14.053 lat (usec): min=266, max=9696, avg=4083.04, stdev=531.35 00:10:14.053 clat percentiles (usec): 00:10:14.053 | 1.00th=[ 3425], 5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3654], 00:10:14.053 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3884], 60.00th=[ 4080], 00:10:14.053 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 4948], 00:10:14.053 | 99.00th=[ 5342], 99.50th=[ 6194], 99.90th=[ 7701], 99.95th=[ 8160], 00:10:14.053 | 99.99th=[ 9503] 00:10:14.053 bw ( KiB/s): min=56408, max=68480, per=100.00%, avg=62906.67, stdev=6088.96, samples=3 00:10:14.053 iops : min=14102, max=17120, avg=15726.67, stdev=1522.24, samples=3 00:10:14.053 write: IOPS=15.6k, BW=61.1MiB/s (64.0MB/s)(122MiB/2001msec); 0 zone resets 00:10:14.053 slat (nsec): min=4523, max=57319, avg=6296.58, stdev=2370.20 00:10:14.053 clat (usec): min=228, max=9529, avg=4086.55, stdev=534.92 00:10:14.053 lat (usec): min=234, max=9547, avg=4092.85, stdev=535.60 00:10:14.053 clat percentiles (usec): 00:10:14.053 | 1.00th=[ 3425], 5.00th=[ 3556], 10.00th=[ 3589], 20.00th=[ 3687], 00:10:14.053 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3916], 60.00th=[ 4080], 00:10:14.053 | 70.00th=[ 4293], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 4948], 00:10:14.053 | 99.00th=[ 5407], 99.50th=[ 6456], 99.90th=[ 7767], 99.95th=[ 8291], 00:10:14.053 | 99.99th=[ 9372] 00:10:14.053 bw ( KiB/s): min=55392, max=68280, per=100.00%, avg=62581.00, stdev=6571.93, samples=3 00:10:14.053 iops : min=13848, max=17070, avg=15645.00, stdev=1642.90, samples=3 00:10:14.053 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:14.053 lat (msec) : 2=0.06%, 4=55.66%, 10=44.24% 00:10:14.053 cpu : usr=99.00%, sys=0.05%, ctx=9, majf=0, minf=626 00:10:14.053 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:14.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.053 issued rwts: total=31268,31286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.053 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.053 00:10:14.053 Run status group 0 (all jobs): 00:10:14.053 READ: bw=61.0MiB/s (64.0MB/s), 61.0MiB/s-61.0MiB/s (64.0MB/s-64.0MB/s), io=122MiB (128MB), run=2001-2001msec 00:10:14.053 WRITE: bw=61.1MiB/s (64.0MB/s), 61.1MiB/s-61.1MiB/s (64.0MB/s-64.0MB/s), io=122MiB (128MB), run=2001-2001msec 00:10:14.312 ----------------------------------------------------- 
00:10:14.312 Suppressions used: 00:10:14.312 count bytes template 00:10:14.312 1 32 /usr/src/fio/parse.c 00:10:14.312 1 8 libtcmalloc_minimal.so 00:10:14.312 ----------------------------------------------------- 00:10:14.312 00:10:14.312 05:57:05 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:14.312 05:57:05 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:14.312 05:57:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:14.312 05:57:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:14.571 05:57:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:14.571 05:57:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:14.829 05:57:06 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:14.829 05:57:06 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:14.829 05:57:06 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:14.829 05:57:06 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:14.829 05:57:06 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:14.829 05:57:06 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:14.829 05:57:06 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:14.829 05:57:06 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:14.829 05:57:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:14.829 05:57:06 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:14.829 05:57:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:14.829 05:57:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:14.829 05:57:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:14.829 05:57:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:14.829 05:57:06 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:14.829 05:57:06 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:14.829 05:57:06 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:14.829 05:57:06 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:14.829 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:14.829 fio-3.35 00:10:14.829 Starting 1 thread 00:10:18.109 00:10:18.109 test: (groupid=0, jobs=1): err= 0: pid=81655: Sat Jul 13 05:57:09 2024 00:10:18.109 read: IOPS=14.6k, BW=57.2MiB/s (60.0MB/s)(115MiB/2001msec) 00:10:18.109 slat (nsec): min=4389, max=66298, avg=6560.62, stdev=3176.97 00:10:18.109 clat (usec): min=300, max=10732, avg=4346.67, 
stdev=462.59 00:10:18.109 lat (usec): min=307, max=10784, avg=4353.23, stdev=463.13 00:10:18.109 clat percentiles (usec): 00:10:18.109 | 1.00th=[ 3621], 5.00th=[ 3785], 10.00th=[ 3851], 20.00th=[ 3982], 00:10:18.109 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4424], 00:10:18.109 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 4948], 00:10:18.109 | 99.00th=[ 5276], 99.50th=[ 5800], 99.90th=[ 8717], 99.95th=[ 9372], 00:10:18.109 | 99.99th=[10683] 00:10:18.109 bw ( KiB/s): min=55656, max=61336, per=100.00%, avg=59069.33, stdev=3008.61, samples=3 00:10:18.109 iops : min=13914, max=15334, avg=14767.33, stdev=752.15, samples=3 00:10:18.109 write: IOPS=14.7k, BW=57.3MiB/s (60.1MB/s)(115MiB/2001msec); 0 zone resets 00:10:18.109 slat (nsec): min=4422, max=63974, avg=6721.15, stdev=3221.92 00:10:18.109 clat (usec): min=366, max=10597, avg=4357.23, stdev=457.12 00:10:18.109 lat (usec): min=374, max=10620, avg=4363.95, stdev=457.64 00:10:18.109 clat percentiles (usec): 00:10:18.109 | 1.00th=[ 3654], 5.00th=[ 3785], 10.00th=[ 3884], 20.00th=[ 3982], 00:10:18.109 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4424], 00:10:18.109 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4883], 95.00th=[ 5014], 00:10:18.109 | 99.00th=[ 5276], 99.50th=[ 5866], 99.90th=[ 8717], 99.95th=[ 9634], 00:10:18.109 | 99.99th=[10290] 00:10:18.109 bw ( KiB/s): min=55920, max=61360, per=100.00%, avg=58904.00, stdev=2758.17, samples=3 00:10:18.109 iops : min=13980, max=15340, avg=14726.00, stdev=689.54, samples=3 00:10:18.109 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:18.109 lat (msec) : 2=0.05%, 4=20.49%, 10=79.41%, 20=0.02% 00:10:18.109 cpu : usr=98.85%, sys=0.10%, ctx=5, majf=0, minf=626 00:10:18.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:18.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.109 issued rwts: total=29313,29359,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.109 00:10:18.109 Run status group 0 (all jobs): 00:10:18.109 READ: bw=57.2MiB/s (60.0MB/s), 57.2MiB/s-57.2MiB/s (60.0MB/s-60.0MB/s), io=115MiB (120MB), run=2001-2001msec 00:10:18.109 WRITE: bw=57.3MiB/s (60.1MB/s), 57.3MiB/s-57.3MiB/s (60.1MB/s-60.1MB/s), io=115MiB (120MB), run=2001-2001msec 00:10:18.109 ----------------------------------------------------- 00:10:18.109 Suppressions used: 00:10:18.109 count bytes template 00:10:18.109 1 32 /usr/src/fio/parse.c 00:10:18.109 1 8 libtcmalloc_minimal.so 00:10:18.109 ----------------------------------------------------- 00:10:18.109 00:10:18.109 05:57:09 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:18.109 05:57:09 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:18.109 05:57:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:18.109 05:57:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:18.368 05:57:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:18.368 05:57:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:18.627 05:57:10 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:18.627 05:57:10 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:18.627 05:57:10 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:18.627 05:57:10 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:18.627 05:57:10 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:18.627 05:57:10 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:18.627 05:57:10 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:18.627 05:57:10 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:18.627 05:57:10 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:18.627 05:57:10 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:18.627 05:57:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:18.627 05:57:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:18.627 05:57:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:18.886 05:57:10 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:18.886 05:57:10 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:18.886 05:57:10 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:18.886 05:57:10 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:18.886 05:57:10 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:18.886 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:18.886 fio-3.35 00:10:18.886 Starting 1 thread 00:10:22.171 00:10:22.171 test: (groupid=0, jobs=1): err= 0: pid=81715: Sat Jul 13 05:57:13 2024 00:10:22.171 read: IOPS=16.7k, BW=65.2MiB/s (68.4MB/s)(131MiB/2001msec) 00:10:22.171 slat (nsec): min=4342, max=53659, avg=5830.05, stdev=2176.55 00:10:22.171 clat (usec): min=245, max=10907, avg=3807.87, stdev=445.75 00:10:22.171 lat (usec): min=250, max=10961, avg=3813.70, stdev=446.35 00:10:22.171 clat percentiles (usec): 00:10:22.171 | 1.00th=[ 3294], 5.00th=[ 3392], 10.00th=[ 3458], 20.00th=[ 3523], 00:10:22.171 | 30.00th=[ 3589], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3785], 00:10:22.171 | 70.00th=[ 3884], 80.00th=[ 4015], 90.00th=[ 4359], 95.00th=[ 4490], 00:10:22.171 | 99.00th=[ 4948], 99.50th=[ 5669], 99.90th=[ 8455], 99.95th=[ 9241], 00:10:22.171 | 99.99th=[10683] 00:10:22.172 bw ( KiB/s): min=61264, max=70096, per=99.52%, avg=66482.67, stdev=4629.67, samples=3 00:10:22.172 iops : min=15316, max=17524, avg=16620.67, stdev=1157.42, samples=3 00:10:22.172 write: IOPS=16.7k, BW=65.4MiB/s (68.5MB/s)(131MiB/2001msec); 0 zone resets 00:10:22.172 slat (nsec): min=4489, max=51785, avg=6013.24, stdev=2230.55 00:10:22.172 clat (usec): min=317, max=10771, avg=3825.24, stdev=453.58 00:10:22.172 lat (usec): min=323, max=10788, avg=3831.25, stdev=454.18 00:10:22.172 clat percentiles (usec): 
00:10:22.172 | 1.00th=[ 3294], 5.00th=[ 3392], 10.00th=[ 3458], 20.00th=[ 3556], 00:10:22.172 | 30.00th=[ 3589], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3785], 00:10:22.172 | 70.00th=[ 3884], 80.00th=[ 4047], 90.00th=[ 4359], 95.00th=[ 4555], 00:10:22.172 | 99.00th=[ 4948], 99.50th=[ 5932], 99.90th=[ 8455], 99.95th=[ 9503], 00:10:22.172 | 99.99th=[10421] 00:10:22.172 bw ( KiB/s): min=61656, max=69824, per=99.20%, avg=66405.33, stdev=4243.47, samples=3 00:10:22.172 iops : min=15414, max=17456, avg=16601.33, stdev=1060.87, samples=3 00:10:22.172 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:22.172 lat (msec) : 2=0.05%, 4=77.92%, 10=21.96%, 20=0.03% 00:10:22.172 cpu : usr=98.95%, sys=0.10%, ctx=4, majf=0, minf=623 00:10:22.172 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:22.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.172 issued rwts: total=33417,33488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.172 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.172 00:10:22.172 Run status group 0 (all jobs): 00:10:22.172 READ: bw=65.2MiB/s (68.4MB/s), 65.2MiB/s-65.2MiB/s (68.4MB/s-68.4MB/s), io=131MiB (137MB), run=2001-2001msec 00:10:22.172 WRITE: bw=65.4MiB/s (68.5MB/s), 65.4MiB/s-65.4MiB/s (68.5MB/s-68.5MB/s), io=131MiB (137MB), run=2001-2001msec 00:10:22.172 ----------------------------------------------------- 00:10:22.172 Suppressions used: 00:10:22.172 count bytes template 00:10:22.172 1 32 /usr/src/fio/parse.c 00:10:22.172 1 8 libtcmalloc_minimal.so 00:10:22.172 ----------------------------------------------------- 00:10:22.172 00:10:22.172 05:57:13 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:22.172 05:57:13 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:22.172 00:10:22.172 real 0m15.815s 00:10:22.172 user 0m12.924s 00:10:22.172 sys 0m1.289s 00:10:22.172 05:57:13 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.172 05:57:13 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:22.172 ************************************ 00:10:22.172 END TEST nvme_fio 00:10:22.172 ************************************ 00:10:22.172 05:57:13 nvme -- common/autotest_common.sh@1142 -- # return 0 00:10:22.172 00:10:22.172 real 1m25.131s 00:10:22.172 user 3m32.400s 00:10:22.172 sys 0m12.471s 00:10:22.172 05:57:13 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.172 05:57:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:22.172 ************************************ 00:10:22.172 END TEST nvme 00:10:22.172 ************************************ 00:10:22.431 05:57:13 -- common/autotest_common.sh@1142 -- # return 0 00:10:22.431 05:57:13 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:10:22.431 05:57:13 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:22.431 05:57:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:22.431 05:57:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.431 05:57:13 -- common/autotest_common.sh@10 -- # set +x 00:10:22.431 ************************************ 00:10:22.431 START TEST nvme_scc 00:10:22.431 ************************************ 00:10:22.431 05:57:13 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:22.431 * Looking for test storage... 
00:10:22.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:22.431 05:57:14 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:22.431 05:57:14 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:22.431 05:57:14 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:22.431 05:57:14 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:22.431 05:57:14 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:22.431 05:57:14 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.431 05:57:14 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.431 05:57:14 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.431 05:57:14 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.431 05:57:14 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.431 05:57:14 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.431 05:57:14 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:22.431 05:57:14 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.431 05:57:14 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:22.431 05:57:14 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:22.431 05:57:14 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:22.431 05:57:14 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:22.432 05:57:14 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:22.432 05:57:14 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:22.432 05:57:14 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:22.432 05:57:14 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:22.432 05:57:14 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:22.432 05:57:14 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:22.432 05:57:14 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:22.432 05:57:14 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:22.432 05:57:14 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:22.432 05:57:14 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:22.691 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:22.949 Waiting for block devices as requested 00:10:22.949 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:23.208 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:23.208 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:23.208 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.487 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:28.487 05:57:19 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:28.487 05:57:19 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:28.487 05:57:19 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:10:28.487 05:57:19 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:28.487 05:57:19 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:28.487 
05:57:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.487 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:28.488 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:28.488 05:57:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:28.488 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:28.488 05:57:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.488 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:28.489 05:57:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:28.489 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
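The trace above is the nvme_get() helper in nvme/functions.sh walking the output of /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0: each "name : value" line is split on ':' by `IFS=: read -r reg val` and stored into a global associative array (here nvme0), and because read assigns the whole remainder of the line to its last variable, values that themselves contain colons (the ps0 and rwt power-state descriptors just above) survive intact. A minimal sketch of that loop, reconstructed from the trace rather than copied from the source, with the whitespace trimming simplified:

    # Sketch of the nvme_get() pattern seen in this trace; a simplified
    # reconstruction, not the verbatim nvme/functions.sh source.
    nvme_get() {
        local ref=$1 reg val
        shift                          # remaining args, e.g.: id-ctrl /dev/nvme0
        local -gA "$ref=()"            # global associative array, as in 'local -gA nvme0=()'
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue  # skip lines with no "name : value" pair
            reg=${reg//[[:space:]]/}   # 'vid      ' -> 'vid', 'lbaf  4' -> 'lbaf4'
            val=${val# }               # drop the space after ':'
            eval "${ref}[\$reg]=\$val" # e.g. nvme0[vid]=0x1b36
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    # Usage mirroring the trace:
    #   nvme_get nvme0 id-ctrl /dev/nvme0
    #   echo "${nvme0[subnqn]}"        # nqn.2019-08.org.qemu:12341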
00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:28.490 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:28.491 05:57:20 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:28.491 05:57:20 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:28.491 05:57:20 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:10:28.491 05:57:20 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:28.491 05:57:20 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:28.491 05:57:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:28.492 05:57:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 
05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:28.492 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:28.493 05:57:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:28.493 05:57:20 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:28.493 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:28.494 05:57:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.494 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 
05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
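For reference, the loop producing this trace (the functions.sh@16-23 frames repeating above) has roughly the following shape. This is a sketch reconstructed from the xtrace output in this log, with the key/value whitespace trimming simplified; it is not the verbatim SPDK nvme/functions.sh source.

nvme_get() {
	local ref=$1 reg val
	shift
	local -gA "$ref=()"                       # global associative array, e.g. nvme1=()

	while IFS=: read -r reg val; do
		# With two fields, read splits on the first ':' only, so values that
		# themselves contain ':' (e.g. ps0's "mp:25.00W ... rrt:0 rrl:0")
		# survive intact in $val.
		reg=${reg//[[:space:]]/} val=${val# } # assumed cleanup of "oacs      : 0x12a"
		[[ -n $val ]] || continue             # the functions.sh@22 guard seen above
		eval "${ref}[${reg}]=\"${val}\""      # the functions.sh@23 assignment seen above
	done < <(/usr/local/src/nvme-cli/nvme "$@")
}

Called as nvme_get nvme1 id-ctrl /dev/nvme1 (or nvme_get nvme1n1 id-ns /dev/nvme1n1, as in the frames just above), it turns every "field : value" line of nvme-cli output into one entry of the named array, which is what the long eval sequence in this log is doing one register at a time.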
00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:28.495 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:28.496 05:57:20 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:28.496 05:57:20 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:10:28.496 05:57:20 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:28.496 05:57:20 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:28.496 05:57:20 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.496 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.497 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.498 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
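The functions.sh@47-63 and scripts/common.sh@15-24 frames earlier in this trace show the outer scan that drives these per-controller dumps. Pieced together from the trace, it looks roughly like the following; the function name and the PCI-address lookup are assumptions for this sketch, while the loop body and the ctrls/nvmes/bdfs/ordered_ctrls bookkeeping mirror the trace.

scan_nvme_ctrls() {                      # name assumed; arrays are declared by the caller
	local ctrl ctrl_dev ns ns_dev pci

	for ctrl in /sys/class/nvme/nvme*; do
		[[ -e $ctrl ]] || continue                      # functions.sh@48
		pci=$(basename "$(readlink -f "$ctrl/device")") # assumed source of e.g. 0000:00:12.0
		pci_can_use "$pci" || continue                  # scripts/common.sh block/allow-list check

		ctrl_dev=${ctrl##*/}                            # e.g. nvme2
		nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"

		local -n _ctrl_ns=${ctrl_dev}_ns                # per-controller namespace map
		for ns in "$ctrl/${ctrl##*/}n"*; do             # functions.sh@54
			[[ -e $ns ]] || continue
			ns_dev=${ns##*/}                            # e.g. nvme2n1
			nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
			_ctrl_ns[${ns##*n}]=$ns_dev                 # keyed by namespace id
		done

		ctrls["$ctrl_dev"]=$ctrl_dev
		nvmes["$ctrl_dev"]=${ctrl_dev}_ns
		bdfs["$ctrl_dev"]=$pci                          # e.g. 0000:00:10.0 for nvme1
		ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
	done
}

That registration step is what appears in this log as the functions.sh@58-63 frames between the nvme1n1 dump and the nvme2 id-ctrl parse now under way.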
00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:28.762 05:57:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 
05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.762 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.763 05:57:20 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.763 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:28.764 05:57:20 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:28.764 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
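[Editor's note] Between the id-ctrl dump and the per-namespace id-ns dumps, the trace shows the namespace walk: `for ns in "$ctrl/${ctrl##*/}n"*` globs /sys/class/nvme/nvme2/nvme2n1 through nvme2n3, each existing entry becomes ns_dev, gets its own nvme_get id-ns pass, and is indexed into _ctrl_ns by namespace number. A hedged reconstruction of that loop, reusing the illustrative parse_id_output from the earlier sketch:

    #!/usr/bin/env bash
    # Walk a controller's namespaces the way the trace does for nvme2.
    ctrl=/sys/class/nvme/nvme2
    declare -A ctrl_ns=()                    # stands in for _ctrl_ns in the trace
    for ns in "$ctrl/${ctrl##*/}n"*; do      # nvme2n1, nvme2n2, nvme2n3 ...
        [[ -e $ns ]] || continue             # an unmatched glob stays literal
        ns_dev=${ns##*/}
        parse_id_output "$ns_dev" nvme id-ns "/dev/$ns_dev"
        ctrl_ns[${ns_dev##*n}]=$ns_dev       # index by namespace number (1, 2, 3)
    done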
00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:28.765 05:57:20 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:28.765 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.766 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
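The nvme2n3 register dump above is produced by the nvme_get parse loop in test/common/nvme/functions.sh: each line of nvme-cli output is split on the first colon, and the value is stored in a bash associative array under the register name. A minimal sketch of that pattern, simplified from the eval-based loop traced above (not the verbatim implementation):

    # Sketch: parse `nvme id-ns` output into an associative array,
    # as the IFS=: / read -r reg val loop traced above does.
    declare -A ns_regs
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue   # skip lines with no "key : value" pair
        reg=${reg//[[:space:]]/}               # "lbaf  4 " -> "lbaf4"
        ns_regs[$reg]=${val# }                 # drop one leading pad space
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3)
    echo "dlfeat=${ns_regs[dlfeat]} lbaf4=${ns_regs[lbaf4]}"

Note that lbads:12 in the in-use format lbaf4 encodes a 2^12 = 4096-byte block, which matches the "Namespace Block Size:4096" reported by the simple_copy test further down.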
00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:28.767 05:57:20 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:28.767 05:57:20 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:10:28.767 05:57:20 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:28.767 05:57:20 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:28.767 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:28.768 05:57:20 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:28.768 05:57:20 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.768 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:28.769 05:57:20 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:28.769 05:57:20 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:28.769 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:28.770 
05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
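The per-controller bookkeeping below (ctrls, nvmes, bdfs, ordered_ctrls) is filled in once per /sys/class/nvme entry. A condensed, hypothetical sketch of that outer scan loop, reusing the id-ctrl parser idea from the earlier sketch (the real scan_nvme_ctrls additionally walks each controller's namespaces and filters on pci_can_use):

    # Sketch: enumerate NVMe controllers and record their PCI addresses,
    # mirroring the ctrls[...]/bdfs[...] assignments traced in this log.
    declare -A ctrls bdfs
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                              # e.g. nvme3
        bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0
        ctrls[$ctrl_dev]=$ctrl_dev
        bdfs[$ctrl_dev]=$bdf
    done
    echo "found ${#ctrls[@]} controllers: ${!ctrls[*]}"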
00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:28.770 05:57:20 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:10:28.770 05:57:20 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:10:28.771 05:57:20 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:10:28.771 05:57:20 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:10:28.771 05:57:20 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:28.771 05:57:20 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:28.771 05:57:20 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:29.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:29.906 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:29.906 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:29.906 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:29.906 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:29.906 05:57:21 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:29.906 05:57:21 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:29.906 05:57:21 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.906 05:57:21 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:29.906 ************************************ 00:10:29.906 START TEST nvme_simple_copy 00:10:29.906 ************************************ 00:10:29.906 05:57:21 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:30.165 Initializing NVMe Controllers 00:10:30.165 Attaching to 0000:00:10.0 00:10:30.165 Controller supports SCC. Attached to 0000:00:10.0 00:10:30.165 Namespace ID: 1 size: 6GB 00:10:30.165 Initialization complete. 00:10:30.165 00:10:30.165 Controller QEMU NVMe Ctrl (12340 ) 00:10:30.165 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:30.165 Namespace Block Size:4096 00:10:30.165 Writing LBAs 0 to 63 with Random Data 00:10:30.165 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:30.165 LBAs matching Written Data: 64 00:10:30.165 00:10:30.165 real 0m0.265s 00:10:30.165 user 0m0.098s 00:10:30.165 sys 0m0.065s 00:10:30.165 05:57:21 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.165 05:57:21 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:30.165 ************************************ 00:10:30.165 END TEST nvme_simple_copy 00:10:30.165 ************************************ 00:10:30.424 05:57:21 nvme_scc -- common/autotest_common.sh@1142 -- # return 0 00:10:30.424 00:10:30.424 real 0m7.988s 00:10:30.424 user 0m1.272s 00:10:30.424 sys 0m1.616s 00:10:30.424 05:57:21 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.424 ************************************ 00:10:30.424 END TEST nvme_scc 00:10:30.424 ************************************ 00:10:30.424 05:57:21 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:30.424 05:57:21 -- common/autotest_common.sh@1142 -- # return 0 00:10:30.424 05:57:21 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:10:30.424 05:57:21 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:10:30.424 05:57:21 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:10:30.424 05:57:21 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]] 00:10:30.424 05:57:21 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:30.424 05:57:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:30.424 05:57:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.424 05:57:21 -- common/autotest_common.sh@10 -- # set +x 00:10:30.424 ************************************ 00:10:30.424 START TEST nvme_fdp 00:10:30.424 ************************************ 00:10:30.424 05:57:21 nvme_fdp -- common/autotest_common.sh@1123 -- # test/nvme/nvme_fdp.sh 00:10:30.424 * Looking for test storage... 
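The controller picked for the simple-copy run above (nvme1 at 0000:00:10.0) is chosen by a capability probe: ONCS bit 8 advertises the Simple Copy command, and every controller here reports oncs=0x15d, which has that bit set (0x15d & 0x100 = 0x100). The ctrl_has_scc check traced earlier reduces to:

    # Sketch: the ONCS bit test behind ctrl_has_scc.
    oncs=0x15d                       # value parsed from id-ctrl above
    if (( oncs & (1 << 8) )); then
        echo "Simple Copy (SCC) supported"   # cf. "Controller supports SCC." in the test output
    fi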
00:10:30.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:30.424 05:57:22 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:30.424 05:57:22 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:30.424 05:57:22 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:30.424 05:57:22 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:30.424 05:57:22 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:30.424 05:57:22 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.424 05:57:22 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.424 05:57:22 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.424 05:57:22 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.424 05:57:22 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.424 05:57:22 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.424 05:57:22 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:30.424 05:57:22 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.424 05:57:22 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:30.425 05:57:22 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:30.425 05:57:22 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:30.425 05:57:22 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:30.425 05:57:22 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:30.425 05:57:22 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:30.425 05:57:22 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:30.425 05:57:22 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:30.425 05:57:22 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:30.425 05:57:22 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:30.425 05:57:22 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:30.992 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:30.992 Waiting for block devices as requested 00:10:30.992 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:30.992 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:31.251 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:31.251 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:36.550 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:36.550 05:57:27 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:36.550 05:57:27 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:36.550 05:57:27 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:10:36.550 05:57:27 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:36.550 05:57:27 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.550 
05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.550 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:36.551 05:57:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:36.551 05:57:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:28 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:36.551 05:57:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.551 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:36.552 05:57:28 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.552 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:36.553 
05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:36.553 
05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.553 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:36.554 05:57:28 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
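The id-ns values captured above pin down the nvme0n1 geometry: flbas=0x4 selects LBA format 4, which the trace marks "(in use)" with ms:0 lbads:12 rp:0, i.e. 4096-byte logical blocks with no metadata, and nsze=0x140000 is the namespace size in logical blocks. A minimal sanity check of that arithmetic, in plain bash (not part of the test itself; the variable names are illustrative only):

    nsze=$((0x140000))            # namespace size in logical blocks, from nvme0n1[nsze]
    lbads=12                      # log2 of the LBA data size, from the in-use lbaf4
    block=$((1 << lbads))         # 4096-byte logical blocks
    bytes=$((nsze * block))       # 1310720 * 4096 = 5368709120 bytes
    echo "$((bytes >> 30)) GiB"   # -> 5 GiB
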
00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:36.554 05:57:28 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:36.554 05:57:28 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:36.554 05:57:28 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:10:36.555 05:57:28 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:36.555 05:57:28 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:36.555 05:57:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 
05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
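At this point the trace is repeating for nvme1 (sn '12340 ', PCI 0000:00:10.0) the same nvme_get walk it performed for nvme0. The loop driving every IFS=: / read / eval triple above follows this pattern; it is reconstructed from the functions.sh@16-23 trace lines rather than quoted from the SPDK source, so treat it as a sketch:

    nvme_get() {                       # e.g. nvme_get nvme1 id-ctrl /dev/nvme1
        local ref=$1 reg val
        shift                          # remaining args are passed to nvme-cli
        local -gA "$ref=()"            # global assoc array named after the ctrl
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # skip banner and blank lines
            reg=${reg//[[:space:]]/}           # "sn       " -> "sn"
            eval "${ref}[$reg]=\"${val# }\""   # e.g. nvme1[sn]='12340 '
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
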
00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.555 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:36.556 05:57:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:36.556 05:57:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.556 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.557 05:57:28 nvme_fdp -- 
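Every eval/assignment pair in the trace above comes from the same nvme_get loop: it runs nvme-cli's id-ctrl against the device, splits each output line on ':' with IFS=: read -r reg val, and stores the pair in a bash associative array. A minimal self-contained sketch of that technique, assuming plain nvme-cli output; parse_id_ctrl and the ctrl[] array are illustrative names, not the actual nvme/functions.sh helpers:

#!/usr/bin/env bash
# Sketch of the id-ctrl parsing technique traced above. parse_id_ctrl and
# ctrl[] are illustrative names, not the real nvme/functions.sh helpers.
declare -A ctrl=()

parse_id_ctrl() {
    local dev=$1 reg val
    # nvme-cli prints one "reg : value" pair per line; split at the colon
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                # strip padding around the key
        val=${val#"${val%%[![:space:]]*}"}      # strip leading blanks from the value
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(nvme id-ctrl "$dev")
}

parse_id_ctrl /dev/nvme1
echo "mn=${ctrl[mn]:-?} mdts=${ctrl[mdts]:-?} subnqn=${ctrl[subnqn]:-?}"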
00:10:36.557   rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:10:36.557 05:57:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:10:36.557 05:57:28 nvme_fdp -- id-ns fields -> nvme1n1[]:
00:10:36.557   nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14
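The @54/@55 lines show how namespaces are discovered: for each controller directory, sysfs exposes its namespaces as children named <ctrl>n<nsid>, so a glob over "$ctrl/${ctrl##*/}n"* finds them. A standalone sketch of that walk; the echo is illustrative, while the real loop feeds each hit back into nvme_get as above:

#!/usr/bin/env bash
# Sketch of the sysfs namespace walk from the @54/@55 trace lines above.
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    # namespaces of nvme1 appear as /sys/class/nvme/nvme1/nvme1n1, n2, ...
    for ns in "$ctrl/${ctrl##*/}n"*; do
        [[ -e $ns ]] || continue        # unmatched glob stays a literal pattern
        echo "controller ${ctrl##*/}: namespace ${ns##*/} -> /dev/${ns##*/}"
    done
done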
00:10:36.558   nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0
00:10:36.558   nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:10:36.558   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:36.559   lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:10:36.559   lbaf4='ms:0 lbads:12 rp:0 ' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:10:36.559 05:57:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:10:36.559 05:57:28 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:36.559 05:57:28 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:36.559 05:57:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:10:36.559 05:57:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:10:36.559 05:57:28 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:36.559 05:57:28 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:10:36.559 05:57:28 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:10:36.559 05:57:28 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:10:36.559 05:57:28 nvme_fdp -- scripts/common.sh@15 -- # local i
00:10:36.559 05:57:28 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]]
00:10:36.559 05:57:28 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]]
00:10:36.559 05:57:28 nvme_fdp -- scripts/common.sh@24 -- # return 0
00:10:36.559 05:57:28 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:10:36.559 05:57:28 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:10:36.559 05:57:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:10:36.559 05:57:28 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:36.559 05:57:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:10:36.559 05:57:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
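The lbaf table for nvme1n1 is worth decoding: flbas=0x7 selects LBA format 7 in its low four bits, and lbaf7 reads "ms:64 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte data blocks with 64 bytes of metadata each. With nsze=0x17a17a blocks, the namespace works out to roughly 6.3 GB. A small sketch of that arithmetic, using only the values parsed above:

#!/usr/bin/env bash
# Decode the in-use LBA format from the nvme1n1 values traced above.
flbas=0x7        # bits 3:0 select the active LBA format index
nsze=0x17a17a    # namespace size in logical blocks
lbads=12         # from the lbaf7 entry: "ms:64 lbads:12 rp:0 (in use)"

fmt=$(( flbas & 0xf ))
block_size=$(( 1 << lbads ))
total_bytes=$(( nsze * block_size ))

printf 'lbaf%d in use: %d-byte blocks, %d blocks, %d bytes total\n' \
    "$fmt" "$block_size" "$nsze" "$total_bytes"
# -> lbaf7 in use: 4096-byte blocks, 1548666 blocks, 6343335936 bytes total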
00:10:36.559 05:57:28 nvme_fdp -- id-ctrl registers -> nvme2[]:
00:10:36.559   vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7
00:10:36.560   cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:10:36.560   crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:10:36.560   wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0
00:10:36.561   mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:36.561 05:57:28 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.561 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 
05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 
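
(Annotation) The trace above is nvme/functions.sh stepping through `nvme id-ctrl /dev/nvme2` one line at a time: line @21 sets `IFS=:` and does `read -r reg val`, @22 skips entries with an empty value, and @23 `eval`s each pair into the global associative array `nvme2`. Below is a minimal sketch of that idiom, assuming "reg : val"-formatted nvme-cli output; the name `nvme_get_sketch` and the exact whitespace-trimming details are illustrative, not the SPDK helper itself:

    # Sketch only: parse "reg : val" lines from nvme-cli into a global
    # associative array named by $1 (e.g. nvme2), as the xtrace above shows.
    nvme_get_sketch() {
      local ref=$1 cmd=$2 dev=$3 reg val
      local -gA "$ref=()"                     # global assoc array, e.g. nvme2=()
      while IFS=: read -r reg val; do         # split each line on the first ':'
        reg=${reg//[[:space:]]/}              # strip padding around the key
        val=${val#"${val%%[![:space:]]*}"}    # left-trim the value
        [[ -n $val ]] && eval "${ref}[${reg}]=\"\$val\""  # e.g. nvme2[oacs]=0x12a
      done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }

Because `read` hands everything after the first separator to its last variable, multi-field values such as the `ps0` power-state line ("mp:25.00W operational enlat:16 ...") and the `lbaf0`-`lbaf7` descriptors keep their internal colons and spaces, which is exactly why those assignments above carry whole quoted strings rather than single tokens.
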
'nvme2n1[nuse]="0x100000"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:36.562 05:57:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:36.562 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.563 05:57:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:36.829 05:57:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:36.829 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- 
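
(Annotation) The @53-@58 records show the namespace side of the same walk: a nameref map per controller, a glob over the controller's sysfs children, one id-ns parse per namespace, and an index keyed by the namespace number. Roughly, reusing the illustrative `nvme_get_sketch` from the note above (again a sketch, not the verbatim SPDK code):

    # Sketch only: enumerate /sys/class/nvme/nvme2/nvme2n* and register each
    # namespace in the per-controller map, as traced at @54-@58 above.
    local -n _ctrl_ns=nvme2_ns            # nameref onto the per-controller map
    for ns in "$ctrl/${ctrl##*/}n"*; do   # /sys/class/nvme/nvme2/nvme2n1 ... n3
      [[ -e $ns ]] || continue
      ns_dev=${ns##*/}                    # nvme2n1, nvme2n2, nvme2n3
      nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"
      _ctrl_ns[${ns##*n}]=$ns_dev         # index by namespace number, e.g. _ctrl_ns[2]=nvme2n2
    done

After this pass, `nvme2n2[nsze]`, `nvme2n2[lbaf4]`, and the rest hold the id-ns fields being echoed in the trace, and `nvme2_ns` maps each namespace number back to the array that describes it.
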
nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:36.830 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:36.831 05:57:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
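The id-ns values above fix the namespace geometry: nsze, ncap and nuse count logical blocks, and flbas=0x4 selects LBA format 4, which these QEMU namespaces report as 'ms:0 lbads:12 rp:0 (in use)'. A quick back-of-the-envelope check in shell arithmetic (a sketch, assuming the 4096-byte lbads:12 format shown in use):

    # nsze is in logical blocks; lbaf4 has lbads:12 -> 2^12-byte blocks
    blocks=$(( 0x100000 ))            # 1048576 logical blocks
    block_size=$(( 1 << 12 ))         # 4096 bytes
    echo $(( blocks * block_size ))   # 4294967296 bytes = 4 GiB per namespace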
00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:36.831 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
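Every assignment in this stretch of the trace comes from the same small loop in nvme/functions.sh: nvme_get runs nvme-cli, splits each output line on the first ':' via IFS, and evals the pair into a global associative array named after the device. A minimal sketch of that pattern (not the full implementation; key trimming and error handling are simplified):

    # Parse "reg : val" lines from nvme-cli into a named global assoc array.
    nvme_get_sketch() {
      local ref=$1 cmd=$2 dev=$3 reg val
      local -gA "$ref=()"
      while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        reg=${reg//[[:space:]]/}   # "nsze : 0x100000" -> key "nsze"
        eval "$ref[$reg]=\$val"    # e.g. nvme2n3[nsze]=' 0x100000'
      done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }

After `nvme_get_sketch nvme2n3 id-ns /dev/nvme2n3`, lookups such as ${nvme2n3[nsze]} work anywhere in the script, which is how the feature checks later in this log read the values back.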
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:10:36.832 05:57:28 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]]
00:10:36.832 05:57:28 nvme_fdp -- scripts/common.sh@24 -- # return 0
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 '
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl '
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 '
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:10:36.832 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0
00:10:36.833 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0
00:10:36.834 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-'
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=-
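Two of the fields just parsed are packed nibble pairs: sqes=0x66 and cqes=0x44 encode the maximum (high nibble) and required minimum (low nibble) submission/completion queue entry sizes as powers of two. Decoding them (a sketch; field layout per the NVMe base specification):

    sqes=0x66 cqes=0x44
    echo "SQE min/max: $(( 1 << (sqes & 0xf) ))/$(( 1 << (sqes >> 4) )) bytes"   # 64/64
    echo "CQE min/max: $(( 1 << (cqes & 0xf) ))/$(( 1 << (cqes >> 4) )) bytes"   # 16/16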
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:10:36.835 05:57:28 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}"
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 ))
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 ))
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 ))
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 ))
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 ))
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3
00:10:36.835 05:57:28 nvme_fdp -- nvme/functions.sh@207 -- # return 0
00:10:36.835 05:57:28 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
00:10:36.835 05:57:28 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0
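The controller selection above reduces to one bitwise test: bit 19 of CTRATT is the Flexible Data Placement attribute, so ctratt=0x88010 qualifies while the 0x8000 controllers do not. functions.sh reads each value back through a bash nameref (local -n _ctrl=nvmeN) into the array populated earlier by nvme_get, then applies the mask. The same check as a standalone sketch, using the values from this run:

    # CTRATT bit 19 = FDP supported (values taken from the trace above)
    for pair in nvme0:0x8000 nvme1:0x8000 nvme2:0x8000 nvme3:0x88010; do
      ctratt=${pair#*:}
      (( ctratt & 1 << 19 )) && echo "${pair%%:*} has FDP"
    done
    # -> only nvme3 has FDP, so the test binds to 0000:00:13.0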
00:10:36.835 05:57:28 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:37.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:37.969 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:10:37.969 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:10:37.969 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:10:37.969 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:10:37.969 05:57:29 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:10:37.969 05:57:29 nvme_fdp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:10:37.969 05:57:29 nvme_fdp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:37.969 05:57:29 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:10:37.969 ************************************
00:10:37.969 START TEST nvme_flexible_data_placement
00:10:37.969 ************************************
00:10:37.969 05:57:29 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:10:38.228 Initializing NVMe Controllers
00:10:38.228 Attaching to 0000:00:13.0
00:10:38.228 Controller supports FDP
00:10:38.228 Attached to 0000:00:13.0
00:10:38.228 Namespace ID: 1 Endurance Group ID: 1
00:10:38.228 Initialization complete.
00:10:38.228 
00:10:38.228 ==================================
00:10:38.228 == FDP tests for Namespace: #01 ==
00:10:38.228 ==================================
00:10:38.228 
00:10:38.228 Get Feature: FDP:
00:10:38.228 =================
00:10:38.228   Enabled: Yes
00:10:38.228   FDP configuration Index: 0
00:10:38.228 
00:10:38.228 FDP configurations log page
00:10:38.228 ===========================
00:10:38.228 Number of FDP configurations: 1
00:10:38.228 Version: 0
00:10:38.228 Size: 112
00:10:38.228 FDP Configuration Descriptor: 0
00:10:38.228   Descriptor Size: 96
00:10:38.228   Reclaim Group Identifier format: 2
00:10:38.228   FDP Volatile Write Cache: Not Present
00:10:38.228   FDP Configuration: Valid
00:10:38.228   Vendor Specific Size: 0
00:10:38.228   Number of Reclaim Groups: 2
00:10:38.228   Number of Reclaim Unit Handles: 8
00:10:38.228   Max Placement Identifiers: 128
00:10:38.228   Number of Namespaces Supported: 256
00:10:38.228   Reclaim Unit Nominal Size: 6000000 bytes
00:10:38.228   Estimated Reclaim Unit Time Limit: Not Reported
00:10:38.228   RUH Desc #000: RUH Type: Initially Isolated
00:10:38.228   RUH Desc #001: RUH Type: Initially Isolated
00:10:38.228   RUH Desc #002: RUH Type: Initially Isolated
00:10:38.228   RUH Desc #003: RUH Type: Initially Isolated
00:10:38.228   RUH Desc #004: RUH Type: Initially Isolated
00:10:38.228   RUH Desc #005: RUH Type: Initially Isolated
00:10:38.228   RUH Desc #006: RUH Type: Initially Isolated
00:10:38.228   RUH Desc #007: RUH Type: Initially Isolated
00:10:38.228 
00:10:38.228 FDP reclaim unit handle usage log page
00:10:38.228 ======================================
00:10:38.228 Number of Reclaim Unit Handles: 8
00:10:38.228   RUH Usage Desc #000: RUH Attributes: Controller Specified
00:10:38.228   RUH Usage Desc #001: RUH Attributes: Unused
00:10:38.228   RUH Usage Desc #002: RUH Attributes: Unused
00:10:38.228   RUH Usage Desc #003: RUH Attributes: Unused
00:10:38.228   RUH Usage Desc #004: RUH Attributes: Unused
00:10:38.228   RUH Usage Desc #005: RUH Attributes: Unused
00:10:38.228   RUH Usage Desc #006: RUH Attributes: Unused
00:10:38.228   RUH Usage Desc #007: RUH Attributes: Unused
00:10:38.228 
00:10:38.228 FDP statistics log page
00:10:38.228 =======================
00:10:38.228 Host bytes with metadata written: 1757130752
00:10:38.228 Media bytes with metadata written: 1758146560
00:10:38.228 Media bytes erased: 0
00:10:38.228 
00:10:38.228 FDP Reclaim unit handle status
00:10:38.228 ==============================
00:10:38.228 Number of RUHS descriptors: 2
00:10:38.228 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003445
00:10:38.228 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:10:38.228 
00:10:38.228 FDP write on placement id: 0 success
00:10:38.228 
00:10:38.228 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:10:38.228 
00:10:38.228 IO mgmt send: RUH update for Placement ID: #0 Success
00:10:38.228 
00:10:38.228 Get Feature: FDP Events for Placement handle: #0
00:10:38.228 ========================
00:10:38.228 Number of FDP Events: 6
00:10:38.228   FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:10:38.228   FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:10:38.228   FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes
00:10:38.228   FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:10:38.228   FDP Event: #4 Type: Media Reallocated Enabled: No
00:10:38.228   FDP Event: #5 Type: Implicitly modified RUH Enabled: No
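The statistics log page is enough for a quick write-amplification sanity check at this point in the run: media bytes written over host bytes written, which here is essentially 1.0, as expected for a fresh namespace. RUAMW in the RUHS descriptors counts logical blocks, so (assuming the 4096-byte LBA format in use) handle #0000 has roughly 0x3445 * 4096 ≈ 52 MiB of writes left in its current reclaim unit.

    host=1757130752 media=1758146560
    # scale by 1000 to stay in integer arithmetic
    echo "WAF x1000 = $(( media * 1000 / host ))"   # -> 1000, i.e. ~1.000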
00:10:38.228 00:10:38.228 FDP events log page 00:10:38.228 =================== 00:10:38.228 Number of FDP events: 1 00:10:38.228 FDP Event #0: 00:10:38.228 Event Type: RU Not Written to Capacity 00:10:38.228 Placement Identifier: Valid 00:10:38.228 NSID: Valid 00:10:38.228 Location: Valid 00:10:38.228 Placement Identifier: 0 00:10:38.228 Event Timestamp: 3 00:10:38.228 Namespace Identifier: 1 00:10:38.228 Reclaim Group Identifier: 0 00:10:38.228 Reclaim Unit Handle Identifier: 0 00:10:38.228 00:10:38.228 FDP test passed 00:10:38.228 00:10:38.228 real 0m0.234s 00:10:38.228 user 0m0.077s 00:10:38.228 sys 0m0.056s 00:10:38.228 ************************************ 00:10:38.228 END TEST nvme_flexible_data_placement 00:10:38.228 ************************************ 00:10:38.228 05:57:29 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:38.228 05:57:29 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:38.228 05:57:29 nvme_fdp -- common/autotest_common.sh@1142 -- # return 0 00:10:38.228 ************************************ 00:10:38.228 END TEST nvme_fdp 00:10:38.228 ************************************ 00:10:38.228 00:10:38.228 real 0m7.871s 00:10:38.228 user 0m1.245s 00:10:38.228 sys 0m1.617s 00:10:38.228 05:57:29 nvme_fdp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:38.228 05:57:29 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:38.228 05:57:29 -- common/autotest_common.sh@1142 -- # return 0 00:10:38.228 05:57:29 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:10:38.228 05:57:29 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:38.228 05:57:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:38.228 05:57:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.228 05:57:29 -- common/autotest_common.sh@10 -- # set +x 00:10:38.228 ************************************ 00:10:38.228 START TEST nvme_rpc 00:10:38.228 ************************************ 00:10:38.228 05:57:29 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:38.488 * Looking for test storage... 
00:10:38.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:38.488 05:57:29 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:38.488 05:57:29 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:38.488 05:57:29 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:10:38.488 05:57:29 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:10:38.488 05:57:29 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:10:38.488 05:57:29 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:10:38.488 05:57:29 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:38.488 05:57:29 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:10:38.488 05:57:29 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:38.488 05:57:29 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:38.488 05:57:29 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:38.488 05:57:30 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:38.488 05:57:30 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:38.488 05:57:30 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:10:38.488 05:57:30 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:38.488 05:57:30 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=83049 00:10:38.488 05:57:30 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:38.488 05:57:30 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:38.488 05:57:30 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 83049 00:10:38.488 05:57:30 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 83049 ']' 00:10:38.488 05:57:30 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.488 05:57:30 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:38.488 05:57:30 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.488 05:57:30 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:38.488 05:57:30 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:38.488 [2024-07-13 05:57:30.167043] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:10:38.488 [2024-07-13 05:57:30.167439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83049 ] 00:10:38.747 [2024-07-13 05:57:30.318489] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:38.747 [2024-07-13 05:57:30.363743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.747 [2024-07-13 05:57:30.363771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.683 05:57:31 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:39.683 05:57:31 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:39.683 05:57:31 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:39.683 Nvme0n1 00:10:39.941 05:57:31 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:39.941 05:57:31 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:39.941 request: 00:10:39.941 { 00:10:39.941 "bdev_name": "Nvme0n1", 00:10:39.941 "filename": "non_existing_file", 00:10:39.941 "method": "bdev_nvme_apply_firmware", 00:10:39.941 "req_id": 1 00:10:39.941 } 00:10:39.941 Got JSON-RPC error response 00:10:39.941 response: 00:10:39.941 { 00:10:39.941 "code": -32603, 00:10:39.941 "message": "open file failed." 00:10:39.941 } 00:10:40.199 05:57:31 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:40.199 05:57:31 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:40.199 05:57:31 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:40.199 05:57:31 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:40.199 05:57:31 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 83049 00:10:40.199 05:57:31 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 83049 ']' 00:10:40.199 05:57:31 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 83049 00:10:40.199 05:57:31 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:10:40.199 05:57:31 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:40.199 05:57:31 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83049 00:10:40.457 killing process with pid 83049 00:10:40.457 05:57:31 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:40.457 05:57:31 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:40.457 05:57:31 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83049' 00:10:40.457 05:57:31 nvme_rpc -- common/autotest_common.sh@967 -- # kill 83049 00:10:40.457 05:57:31 nvme_rpc -- common/autotest_common.sh@972 -- # wait 83049 00:10:40.715 00:10:40.715 real 0m2.348s 00:10:40.715 user 0m4.780s 00:10:40.715 sys 0m0.536s 00:10:40.715 05:57:32 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:40.715 ************************************ 00:10:40.715 END TEST nvme_rpc 00:10:40.715 05:57:32 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.715 ************************************ 00:10:40.715 05:57:32 -- common/autotest_common.sh@1142 -- # return 0 00:10:40.715 05:57:32 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 
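(Editor's note: a condensed reproduction of the negative check that just passed, assuming spdk_tgt is already listening on the default /var/tmp/spdk.sock; the controller name, BDF, and RPC methods are taken verbatim from the trace above:)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    # Applying a nonexistent firmware file must fail with -32603 "open file failed."
    if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo 'unexpected success' >&2; exit 1
    fi
    $rpc bdev_nvme_detach_controller Nvme0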
00:10:40.715 05:57:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:40.715 05:57:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.715 05:57:32 -- common/autotest_common.sh@10 -- # set +x 00:10:40.715 ************************************ 00:10:40.715 START TEST nvme_rpc_timeouts 00:10:40.715 ************************************ 00:10:40.715 05:57:32 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:40.715 * Looking for test storage... 00:10:40.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:40.715 05:57:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:40.715 05:57:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_83109 00:10:40.715 05:57:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_83109 00:10:40.715 05:57:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=83133 00:10:40.715 05:57:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:40.715 05:57:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:40.715 05:57:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 83133 00:10:40.715 05:57:32 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 83133 ']' 00:10:40.715 05:57:32 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.715 05:57:32 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:40.715 05:57:32 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.715 05:57:32 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:40.715 05:57:32 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:40.973 [2024-07-13 05:57:32.484884] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:10:40.973 [2024-07-13 05:57:32.485071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83133 ] 00:10:40.973 [2024-07-13 05:57:32.634618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:40.973 [2024-07-13 05:57:32.672214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.973 [2024-07-13 05:57:32.672283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.907 05:57:33 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:41.907 Checking default timeout settings: 00:10:41.907 05:57:33 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:10:41.907 05:57:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:41.907 05:57:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:42.165 Making settings changes with rpc: 00:10:42.165 05:57:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:42.165 05:57:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:42.424 Check default vs. modified settings: 00:10:42.424 05:57:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:10:42.424 05:57:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_83109 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_83109 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:42.682 Setting action_on_timeout is changed as expected. 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_83109 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_83109 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:42.682 Setting timeout_us is changed as expected. 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_83109 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_83109 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:42.682 Setting timeout_admin_us is changed as expected. 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
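(Editor's note: the three checks above share one extraction pattern; condensed here as a sketch, assuming both snapshot files were produced with "rpc.py save_config" before and after the bdev_nvme_set_options call, as traced earlier:)

    setting_value() {   # pull one field out of a saved-config snapshot
        grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(setting_value "$setting" /tmp/settings_default_83109)
        after=$(setting_value "$setting" /tmp/settings_modified_83109)
        [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
    done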
00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_83109 /tmp/settings_modified_83109 00:10:42.682 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 83133 00:10:42.682 05:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 83133 ']' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 83133 00:10:42.682 05:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:10:42.682 05:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83133 00:10:42.682 05:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:42.682 killing process with pid 83133 00:10:42.682 05:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83133' 00:10:42.682 05:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 83133 00:10:42.682 05:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 83133 00:10:43.248 RPC TIMEOUT SETTING TEST PASSED. 00:10:43.248 05:57:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:10:43.248 00:10:43.248 real 0m2.380s 00:10:43.248 user 0m4.946s 00:10:43.248 sys 0m0.476s 00:10:43.248 05:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:43.248 ************************************ 00:10:43.248 END TEST nvme_rpc_timeouts 00:10:43.248 ************************************ 00:10:43.248 05:57:34 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:43.248 05:57:34 -- common/autotest_common.sh@1142 -- # return 0 00:10:43.248 05:57:34 -- spdk/autotest.sh@243 -- # uname -s 00:10:43.248 05:57:34 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:10:43.248 05:57:34 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:43.248 05:57:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:43.248 05:57:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.248 05:57:34 -- common/autotest_common.sh@10 -- # set +x 00:10:43.248 ************************************ 00:10:43.248 START TEST sw_hotplug 00:10:43.248 ************************************ 00:10:43.248 05:57:34 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:43.248 * Looking for test storage... 
00:10:43.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:43.248 05:57:34 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:43.506 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:43.763 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:43.763 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:43.763 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:43.763 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:43.763 05:57:35 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:43.763 05:57:35 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:43.763 05:57:35 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:10:43.763 05:57:35 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@230 -- # local class 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@15 -- # local i 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:10:43.763 05:57:35 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:10:43.764 05:57:35 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@15 -- # local i 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@15 -- # local i 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:10:43.764 05:57:35 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:43.764 05:57:35 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:43.764 05:57:35 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:43.764 05:57:35 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:44.022 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:44.281 Waiting for block devices as requested 00:10:44.281 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:44.539 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:44.539 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:44.539 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:49.798 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:49.798 05:57:41 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:49.798 05:57:41 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:50.074 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:50.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:50.074 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:50.345 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:50.604 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:50.604 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:50.863 05:57:42 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:50.863 05:57:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.863 05:57:42 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:50.863 05:57:42 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:50.863 05:57:42 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=83972 00:10:50.863 05:57:42 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:50.863 05:57:42 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:50.863 05:57:42 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:50.863 05:57:42 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:50.863 05:57:42 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:10:50.863 05:57:42 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:10:50.863 05:57:42 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:10:50.863 05:57:42 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:10:50.863 05:57:42 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:10:50.863 05:57:42 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:50.863 05:57:42 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:50.863 05:57:42 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:50.863 05:57:42 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:50.863 05:57:42 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:51.121 Initializing NVMe Controllers 00:10:51.121 Attaching to 0000:00:10.0 00:10:51.121 Attaching to 0000:00:11.0 00:10:51.121 Attached to 0000:00:10.0 00:10:51.121 Attached to 0000:00:11.0 00:10:51.121 Initialization complete. Starting I/O... 
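(Editor's note: the hotplug run that just started reduces to the commands below, copied from the trace; PCI_ALLOWED restricts setup.sh to the two controllers under test, and SPDK's hotplug example is backgrounded so the script can trigger removals and rescans while I/O runs:)

    PCI_ALLOWED='0000:00:10.0 0000:00:11.0' /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning &
    hotplug_pid=$!   # killed by the trap installed at sw_hotplug.sh@77 on SIGINT/SIGTERM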
00:10:51.121 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:51.121 QEMU NVMe Ctrl (12341 ): 4 I/Os completed (+4) 00:10:51.121 00:10:52.054 QEMU NVMe Ctrl (12340 ): 1152 I/Os completed (+1152) 00:10:52.054 QEMU NVMe Ctrl (12341 ): 1306 I/Os completed (+1302) 00:10:52.054 00:10:52.989 QEMU NVMe Ctrl (12340 ): 2892 I/Os completed (+1740) 00:10:52.989 QEMU NVMe Ctrl (12341 ): 3135 I/Os completed (+1829) 00:10:52.989 00:10:54.365 QEMU NVMe Ctrl (12340 ): 4978 I/Os completed (+2086) 00:10:54.365 QEMU NVMe Ctrl (12341 ): 5329 I/Os completed (+2194) 00:10:54.365 00:10:55.302 QEMU NVMe Ctrl (12340 ): 7022 I/Os completed (+2044) 00:10:55.302 QEMU NVMe Ctrl (12341 ): 7411 I/Os completed (+2082) 00:10:55.302 00:10:56.238 QEMU NVMe Ctrl (12340 ): 9186 I/Os completed (+2164) 00:10:56.238 QEMU NVMe Ctrl (12341 ): 9646 I/Os completed (+2235) 00:10:56.238 00:10:56.805 05:57:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:56.805 05:57:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:56.805 05:57:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:56.805 [2024-07-13 05:57:48.506339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:56.805 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:56.805 [2024-07-13 05:57:48.508198] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.805 [2024-07-13 05:57:48.508309] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.805 [2024-07-13 05:57:48.508371] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.805 [2024-07-13 05:57:48.508413] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.805 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:56.805 [2024-07-13 05:57:48.510808] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.805 [2024-07-13 05:57:48.510894] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.805 [2024-07-13 05:57:48.510936] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.805 [2024-07-13 05:57:48.510978] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:56.805 05:57:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:56.805 05:57:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:57.065 [2024-07-13 05:57:48.536656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:57.065 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:57.065 [2024-07-13 05:57:48.538561] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.065 [2024-07-13 05:57:48.538671] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.065 [2024-07-13 05:57:48.538717] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.065 [2024-07-13 05:57:48.538760] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.065 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:57.065 [2024-07-13 05:57:48.542847] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.065 [2024-07-13 05:57:48.542914] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.065 [2024-07-13 05:57:48.542961] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.065 [2024-07-13 05:57:48.543001] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.065 05:57:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:57.065 05:57:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:57.065 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:57.065 EAL: Scan for (pci) bus failed. 00:10:57.065 05:57:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:57.065 05:57:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:57.065 05:57:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:57.065 00:10:57.065 05:57:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:57.065 05:57:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:57.065 05:57:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:57.065 05:57:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:57.065 05:57:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:57.065 Attaching to 0000:00:10.0 00:10:57.065 Attached to 0000:00:10.0 00:10:57.326 05:57:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:57.326 05:57:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:57.326 05:57:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:57.326 Attaching to 0000:00:11.0 00:10:57.326 Attached to 0000:00:11.0 00:10:58.263 QEMU NVMe Ctrl (12340 ): 2027 I/Os completed (+2027) 00:10:58.263 QEMU NVMe Ctrl (12341 ): 1805 I/Os completed (+1805) 00:10:58.263 00:10:59.200 QEMU NVMe Ctrl (12340 ): 4090 I/Os completed (+2063) 00:10:59.200 QEMU NVMe Ctrl (12341 ): 3967 I/Os completed (+2162) 00:10:59.200 00:11:00.135 QEMU NVMe Ctrl (12340 ): 6302 I/Os completed (+2212) 00:11:00.135 QEMU NVMe Ctrl (12341 ): 6259 I/Os completed (+2292) 00:11:00.135 00:11:01.070 QEMU NVMe Ctrl (12340 ): 8301 I/Os completed (+1999) 00:11:01.070 QEMU NVMe Ctrl (12341 ): 8398 I/Os completed (+2139) 00:11:01.070 00:11:02.005 QEMU NVMe Ctrl (12340 ): 10415 I/Os completed (+2114) 00:11:02.005 QEMU NVMe Ctrl (12341 ): 10621 I/Os completed (+2223) 00:11:02.005 00:11:03.381 QEMU NVMe Ctrl (12340 ): 12592 I/Os completed (+2177) 00:11:03.381 QEMU NVMe Ctrl (12341 ): 12888 I/Os completed (+2267) 00:11:03.381 00:11:04.317 QEMU NVMe Ctrl (12340 ): 14896 I/Os completed (+2304) 00:11:04.317 QEMU NVMe Ctrl (12341 ): 15232 I/Os completed (+2344) 
00:11:04.317 00:11:05.273 QEMU NVMe Ctrl (12340 ): 17102 I/Os completed (+2206) 00:11:05.273 QEMU NVMe Ctrl (12341 ): 17532 I/Os completed (+2300) 00:11:05.273 00:11:06.207 QEMU NVMe Ctrl (12340 ): 19189 I/Os completed (+2087) 00:11:06.207 QEMU NVMe Ctrl (12341 ): 19730 I/Os completed (+2198) 00:11:06.207 00:11:07.144 QEMU NVMe Ctrl (12340 ): 21249 I/Os completed (+2060) 00:11:07.144 QEMU NVMe Ctrl (12341 ): 21915 I/Os completed (+2185) 00:11:07.144 00:11:08.086 QEMU NVMe Ctrl (12340 ): 23325 I/Os completed (+2076) 00:11:08.086 QEMU NVMe Ctrl (12341 ): 24114 I/Os completed (+2199) 00:11:08.086 00:11:09.022 QEMU NVMe Ctrl (12340 ): 25405 I/Os completed (+2080) 00:11:09.022 QEMU NVMe Ctrl (12341 ): 26319 I/Os completed (+2205) 00:11:09.022 00:11:09.281 05:58:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:09.281 05:58:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:09.281 05:58:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:09.281 05:58:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:09.281 [2024-07-13 05:58:00.864128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:09.281 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:09.281 [2024-07-13 05:58:00.866109] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.281 [2024-07-13 05:58:00.866191] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.281 [2024-07-13 05:58:00.866222] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.281 [2024-07-13 05:58:00.866263] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.281 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:09.281 [2024-07-13 05:58:00.868590] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.281 [2024-07-13 05:58:00.868694] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.281 [2024-07-13 05:58:00.868727] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.281 [2024-07-13 05:58:00.868762] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.281 05:58:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:09.281 05:58:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:09.281 [2024-07-13 05:58:00.896036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:09.281 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:09.281 [2024-07-13 05:58:00.897824] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.281 [2024-07-13 05:58:00.897896] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.281 [2024-07-13 05:58:00.897930] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.281 [2024-07-13 05:58:00.897954] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.281 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:09.281 [2024-07-13 05:58:00.899874] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.281 [2024-07-13 05:58:00.899947] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.281 [2024-07-13 05:58:00.899979] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.281 [2024-07-13 05:58:00.900004] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.281 05:58:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:09.281 05:58:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:09.281 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:09.281 EAL: Scan for (pci) bus failed. 00:11:09.539 05:58:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:09.539 05:58:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:09.539 05:58:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:09.539 05:58:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:09.539 05:58:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:09.539 05:58:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:09.539 05:58:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:09.539 05:58:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:09.539 Attaching to 0000:00:10.0 00:11:09.539 Attached to 0000:00:10.0 00:11:09.539 05:58:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:09.539 05:58:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:09.539 05:58:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:09.539 Attaching to 0000:00:11.0 00:11:09.539 Attached to 0000:00:11.0 00:11:10.105 QEMU NVMe Ctrl (12340 ): 1279 I/Os completed (+1279) 00:11:10.105 QEMU NVMe Ctrl (12341 ): 1085 I/Os completed (+1085) 00:11:10.105 00:11:11.040 QEMU NVMe Ctrl (12340 ): 3467 I/Os completed (+2188) 00:11:11.040 QEMU NVMe Ctrl (12341 ): 3337 I/Os completed (+2252) 00:11:11.040 00:11:11.981 QEMU NVMe Ctrl (12340 ): 5575 I/Os completed (+2108) 00:11:11.981 QEMU NVMe Ctrl (12341 ): 5548 I/Os completed (+2211) 00:11:11.981 00:11:13.358 QEMU NVMe Ctrl (12340 ): 7775 I/Os completed (+2200) 00:11:13.358 QEMU NVMe Ctrl (12341 ): 7801 I/Os completed (+2253) 00:11:13.358 00:11:14.293 QEMU NVMe Ctrl (12340 ): 9979 I/Os completed (+2204) 00:11:14.293 QEMU NVMe Ctrl (12341 ): 10061 I/Os completed (+2260) 00:11:14.293 00:11:15.227 QEMU NVMe Ctrl (12340 ): 12175 I/Os completed (+2196) 00:11:15.227 QEMU NVMe Ctrl (12341 ): 12329 I/Os completed (+2268) 00:11:15.227 00:11:16.162 QEMU NVMe Ctrl (12340 ): 14375 I/Os completed (+2200) 00:11:16.162 QEMU NVMe Ctrl (12341 ): 14609 I/Os completed (+2280) 00:11:16.162 
00:11:17.098 QEMU NVMe Ctrl (12340 ): 16575 I/Os completed (+2200) 00:11:17.098 QEMU NVMe Ctrl (12341 ): 16855 I/Os completed (+2246) 00:11:17.098 00:11:18.034 QEMU NVMe Ctrl (12340 ): 18727 I/Os completed (+2152) 00:11:18.034 QEMU NVMe Ctrl (12341 ): 19129 I/Os completed (+2274) 00:11:18.034 00:11:18.970 QEMU NVMe Ctrl (12340 ): 20887 I/Os completed (+2160) 00:11:18.970 QEMU NVMe Ctrl (12341 ): 21360 I/Os completed (+2231) 00:11:18.970 00:11:20.347 QEMU NVMe Ctrl (12340 ): 23139 I/Os completed (+2252) 00:11:20.347 QEMU NVMe Ctrl (12341 ): 23691 I/Os completed (+2331) 00:11:20.347 00:11:21.319 QEMU NVMe Ctrl (12340 ): 25301 I/Os completed (+2162) 00:11:21.319 QEMU NVMe Ctrl (12341 ): 25911 I/Os completed (+2220) 00:11:21.319 00:11:21.577 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:21.577 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:21.577 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:21.577 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:21.577 [2024-07-13 05:58:13.230047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:21.577 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:21.577 [2024-07-13 05:58:13.233657] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.577 [2024-07-13 05:58:13.233723] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.577 [2024-07-13 05:58:13.233749] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.577 [2024-07-13 05:58:13.233775] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.577 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:21.577 [2024-07-13 05:58:13.235724] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.577 [2024-07-13 05:58:13.235774] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.577 [2024-07-13 05:58:13.235798] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.577 [2024-07-13 05:58:13.235821] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.577 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:21.577 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:21.577 [2024-07-13 05:58:13.262412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:21.577 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:21.577 [2024-07-13 05:58:13.263918] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.577 [2024-07-13 05:58:13.263972] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.577 [2024-07-13 05:58:13.263999] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.577 [2024-07-13 05:58:13.264019] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.577 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:21.577 [2024-07-13 05:58:13.265573] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.577 [2024-07-13 05:58:13.265617] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.577 [2024-07-13 05:58:13.265642] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.577 [2024-07-13 05:58:13.265660] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.577 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:21.577 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:21.577 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:21.577 EAL: Scan for (pci) bus failed. 00:11:21.835 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:21.835 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:21.835 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:21.835 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:21.835 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:21.835 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:21.835 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:21.835 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:21.835 Attaching to 0000:00:10.0 00:11:21.835 Attached to 0000:00:10.0 00:11:21.835 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:22.092 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:22.092 05:58:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:22.092 Attaching to 0000:00:11.0 00:11:22.092 Attached to 0000:00:11.0 00:11:22.092 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:22.092 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:22.092 [2024-07-13 05:58:13.583332] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:34.288 05:58:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:34.288 05:58:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:34.288 05:58:25 sw_hotplug -- common/autotest_common.sh@715 -- # time=43.08 00:11:34.288 05:58:25 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.08 00:11:34.288 05:58:25 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:11:34.288 05:58:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.08 00:11:34.288 05:58:25 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.08 2 00:11:34.288 remove_attach_helper took 43.08s to complete (handling 2 nvme drive(s)) 05:58:25 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:40.862 05:58:31 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 83972 00:11:40.862 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (83972) - No such process 00:11:40.862 05:58:31 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 83972 00:11:40.862 05:58:31 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:40.862 05:58:31 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:40.862 05:58:31 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:40.862 05:58:31 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=84525 00:11:40.862 05:58:31 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:40.862 05:58:31 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:40.862 05:58:31 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 84525 00:11:40.862 05:58:31 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 84525 ']' 00:11:40.862 05:58:31 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.862 05:58:31 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:40.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.862 05:58:31 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.862 05:58:31 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:40.862 05:58:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.862 [2024-07-13 05:58:31.700743] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:11:40.862 [2024-07-13 05:58:31.700926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84525 ] 00:11:40.862 [2024-07-13 05:58:31.844436] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.862 [2024-07-13 05:58:31.877417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.862 05:58:32 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.862 05:58:32 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:11:40.862 05:58:32 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:40.862 05:58:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.862 05:58:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.862 05:58:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.862 05:58:32 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:40.862 05:58:32 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:40.862 05:58:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:40.862 05:58:32 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:11:40.862 05:58:32 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:11:40.862 05:58:32 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:11:40.862 05:58:32 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:11:40.862 05:58:32 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:11:40.862 05:58:32 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:40.862 05:58:32 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:40.862 05:58:32 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:40.862 05:58:32 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:40.862 05:58:32 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:47.431 05:58:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:47.431 05:58:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:47.431 05:58:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:47.431 05:58:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:47.431 05:58:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:47.431 05:58:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:47.431 05:58:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:47.431 05:58:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:47.431 05:58:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.431 05:58:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.431 05:58:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.431 05:58:38 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.431 05:58:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.431 [2024-07-13 05:58:38.662487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:47.431 05:58:38 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.431 [2024-07-13 05:58:38.664760] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.431 [2024-07-13 05:58:38.664803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.431 [2024-07-13 05:58:38.664823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.431 [2024-07-13 05:58:38.664854] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.431 [2024-07-13 05:58:38.664868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.431 [2024-07-13 05:58:38.664884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.431 [2024-07-13 05:58:38.664897] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.431 [2024-07-13 05:58:38.664913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.431 [2024-07-13 05:58:38.664926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.431 [2024-07-13 05:58:38.664941] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.431 [2024-07-13 05:58:38.664953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.431 [2024-07-13 05:58:38.664967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.431 05:58:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:47.431 05:58:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:47.689 [2024-07-13 05:58:39.162485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:47.689 [2024-07-13 05:58:39.164742] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.689 [2024-07-13 05:58:39.164803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.689 [2024-07-13 05:58:39.164828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.690 [2024-07-13 05:58:39.164856] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.690 [2024-07-13 05:58:39.164872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.690 [2024-07-13 05:58:39.164885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.690 [2024-07-13 05:58:39.164901] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.690 [2024-07-13 05:58:39.164913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.690 [2024-07-13 05:58:39.164927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.690 [2024-07-13 05:58:39.164940] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.690 [2024-07-13 05:58:39.164958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.690 [2024-07-13 05:58:39.164971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.690 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:47.690 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:47.690 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:47.690 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.690 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.690 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.690 05:58:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.690 05:58:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.690 05:58:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.690 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:47.690 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:47.690 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:47.690 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:47.690 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:47.948 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:47.948 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:47.948 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:47.948 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:47.948 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:47.948 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:47.948 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:47.948 05:58:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.160 05:58:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.160 05:58:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.160 05:58:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.160 [2024-07-13 05:58:51.662679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.160 [2024-07-13 05:58:51.665327] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.160 [2024-07-13 05:58:51.665438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.160 [2024-07-13 05:58:51.665611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.160 [2024-07-13 05:58:51.665705] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.160 [2024-07-13 05:58:51.665756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.160 [2024-07-13 05:58:51.665828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.160 [2024-07-13 05:58:51.665892] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.160 [2024-07-13 05:58:51.665938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.160 [2024-07-13 05:58:51.666001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.160 [2024-07-13 05:58:51.666070] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.160 [2024-07-13 05:58:51.666114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.160 [2024-07-13 05:58:51.666207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.160 05:58:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.160 05:58:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.160 05:58:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:00.160 05:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:00.741 [2024-07-13 05:58:52.162674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:00.741 [2024-07-13 05:58:52.165156] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.741 [2024-07-13 05:58:52.165392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.741 [2024-07-13 05:58:52.165559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.741 [2024-07-13 05:58:52.165822] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.741 [2024-07-13 05:58:52.165884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.741 [2024-07-13 05:58:52.166064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.741 [2024-07-13 05:58:52.166151] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.741 [2024-07-13 05:58:52.166278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.741 [2024-07-13 05:58:52.166355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.741 [2024-07-13 05:58:52.166529] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.741 [2024-07-13 05:58:52.166771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.741 [2024-07-13 05:58:52.166926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.741 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:00.741 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.741 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.741 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.741 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.741 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.741 05:58:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.741 05:58:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.741 05:58:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.741 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:00.741 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:00.741 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:00.741 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:00.741 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:00.741 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:01.000 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:01.000 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:01.000 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:01.000 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:01.000 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:01.000 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:01.000 05:58:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:13.202 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:13.202 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:13.202 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:13.202 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.203 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.203 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:13.203 05:59:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.203 05:59:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.203 05:59:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.203 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:13.203 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:13.203 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:13.203 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:13.203 [2024-07-13 05:59:04.662812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:13.203 [2024-07-13 05:59:04.665521] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.203 [2024-07-13 05:59:04.665860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.203 [2024-07-13 05:59:04.666071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.203 [2024-07-13 05:59:04.666277] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.203 [2024-07-13 05:59:04.666518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.203 [2024-07-13 05:59:04.666553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.203 [2024-07-13 05:59:04.666573] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.203 [2024-07-13 05:59:04.666593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.203 [2024-07-13 05:59:04.666608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.203 [2024-07-13 05:59:04.666626] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.203 [2024-07-13 05:59:04.666640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.203 [2024-07-13 05:59:04.666656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.203 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:13.203 05:59:04 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:12:13.203 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:13.203 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:13.203 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:13.203 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.203 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.203 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:13.203 05:59:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.203 05:59:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.203 05:59:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.203 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:13.203 05:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:13.461 [2024-07-13 05:59:05.162814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:13.461 [2024-07-13 05:59:05.165458] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.461 [2024-07-13 05:59:05.165523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.461 [2024-07-13 05:59:05.165550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.461 [2024-07-13 05:59:05.165572] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.461 [2024-07-13 05:59:05.165619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.461 [2024-07-13 05:59:05.165633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.461 [2024-07-13 05:59:05.165653] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.462 [2024-07-13 05:59:05.165681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.462 [2024-07-13 05:59:05.165696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.462 [2024-07-13 05:59:05.165709] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.462 [2024-07-13 05:59:05.165723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.462 [2024-07-13 05:59:05.165736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.720 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:13.720 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:13.720 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:13.720 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.720 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.720 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:12:13.720 05:59:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.720 05:59:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.720 05:59:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.720 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:13.720 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:13.720 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:13.720 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:13.720 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:13.978 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:13.978 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:13.978 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:13.978 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:13.978 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:13.978 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:13.979 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:13.979 05:59:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:26.182 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:26.182 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:26.182 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:26.182 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:26.182 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:26.182 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:26.182 05:59:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.182 05:59:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.182 05:59:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.182 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:26.183 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:26.183 05:59:17 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.10 00:12:26.183 05:59:17 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.10 00:12:26.183 05:59:17 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:12:26.183 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.10 00:12:26.183 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.10 2 00:12:26.183 remove_attach_helper took 45.10s to complete (handling 2 nvme drive(s)) 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:26.183 05:59:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.183 05:59:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.183 05:59:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.183 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:26.183 05:59:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.183 05:59:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.183 05:59:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.183 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@122 -- 
# debug_remove_attach_helper 3 6 true 00:12:26.183 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:26.183 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:26.183 05:59:17 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:12:26.183 05:59:17 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:12:26.183 05:59:17 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:12:26.183 05:59:17 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:12:26.183 05:59:17 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:12:26.183 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:26.183 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:26.183 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:26.183 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:26.183 05:59:17 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:32.748 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:32.748 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:32.748 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:32.748 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:32.748 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:32.748 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:32.748 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:32.748 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:32.748 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:32.748 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:32.748 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:32.748 05:59:23 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.748 05:59:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:32.748 [2024-07-13 05:59:23.794776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:32.748 [2024-07-13 05:59:23.796594] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.748 [2024-07-13 05:59:23.796825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.748 [2024-07-13 05:59:23.797011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.748 [2024-07-13 05:59:23.797265] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.748 [2024-07-13 05:59:23.797455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.748 [2024-07-13 05:59:23.797658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.748 [2024-07-13 05:59:23.797811] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.748 [2024-07-13 05:59:23.797951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.748 [2024-07-13 05:59:23.797978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.748 [2024-07-13 05:59:23.798005] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.748 [2024-07-13 05:59:23.798021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.748 [2024-07-13 05:59:23.798038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.748 05:59:23 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.748 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:32.748 05:59:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:32.748 [2024-07-13 05:59:24.194792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:32.748 [2024-07-13 05:59:24.196260] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.748 [2024-07-13 05:59:24.196346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.748 [2024-07-13 05:59:24.196371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.748 [2024-07-13 05:59:24.196391] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.748 [2024-07-13 05:59:24.196407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.748 [2024-07-13 05:59:24.196420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.748 [2024-07-13 05:59:24.196435] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.748 [2024-07-13 05:59:24.196448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.748 [2024-07-13 05:59:24.196462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.748 [2024-07-13 05:59:24.196475] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:32.748 [2024-07-13 05:59:24.196488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.748 [2024-07-13 05:59:24.196501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.748 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:32.748 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:32.748 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:32.748 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:32.748 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:32.748 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:32.748 05:59:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.748 05:59:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:32.748 05:59:24 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.748 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:32.748 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:33.007 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:33.007 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:33.007 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:33.007 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:33.007 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:33.007 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:33.007 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:33.007 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:33.007 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:33.007 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:33.007 05:59:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:45.314 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:45.314 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:45.314 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:45.314 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:45.314 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:45.314 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:45.314 05:59:36 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.314 05:59:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:45.314 05:59:36 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.314 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:45.314 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:45.314 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:45.314 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:45.314 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:45.315 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:45.315 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:45.315 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:45.315 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:45.315 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:45.315 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:45.315 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:45.315 05:59:36 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.315 05:59:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:45.315 [2024-07-13 05:59:36.794945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:45.315 [2024-07-13 05:59:36.796475] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.315 [2024-07-13 05:59:36.796559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.315 [2024-07-13 05:59:36.796582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.315 [2024-07-13 05:59:36.796606] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.315 [2024-07-13 05:59:36.796621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.315 [2024-07-13 05:59:36.796640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.315 [2024-07-13 05:59:36.796654] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.315 [2024-07-13 05:59:36.796669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.315 [2024-07-13 05:59:36.796683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.315 [2024-07-13 05:59:36.796698] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.315 [2024-07-13 05:59:36.796711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.315 [2024-07-13 05:59:36.796727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.315 05:59:36 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.315 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:45.315 05:59:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:45.573 [2024-07-13 05:59:37.194945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:45.573 [2024-07-13 05:59:37.196522] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.573 [2024-07-13 05:59:37.196568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.573 [2024-07-13 05:59:37.196598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.573 [2024-07-13 05:59:37.196619] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.573 [2024-07-13 05:59:37.196635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.573 [2024-07-13 05:59:37.196649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.573 [2024-07-13 05:59:37.196664] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.573 [2024-07-13 05:59:37.196677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.573 [2024-07-13 05:59:37.196692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.573 [2024-07-13 05:59:37.196705] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.573 [2024-07-13 05:59:37.196720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:45.573 [2024-07-13 05:59:37.196733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:45.831 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:45.831 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:45.831 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:45.831 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:45.831 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:45.831 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:45.831 05:59:37 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.831 05:59:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:45.831 05:59:37 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.831 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:45.831 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:45.831 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:45.831 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:45.831 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:45.831 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:46.089 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:46.089 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:46.089 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:46.089 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:46.089 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:46.089 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:46.089 05:59:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:58.291 05:59:49 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.291 05:59:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:58.291 05:59:49 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:58.291 [2024-07-13 05:59:49.795193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:58.291 05:59:49 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.291 05:59:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:58.291 [2024-07-13 05:59:49.800901] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:58.291 [2024-07-13 05:59:49.801091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.291 [2024-07-13 05:59:49.801215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.291 [2024-07-13 05:59:49.801326] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:58.291 [2024-07-13 05:59:49.801376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.291 [2024-07-13 05:59:49.801530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.291 [2024-07-13 05:59:49.801604] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:58.291 [2024-07-13 05:59:49.801676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.291 [2024-07-13 05:59:49.801751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.291 [2024-07-13 05:59:49.801828] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:58.291 [2024-07-13 05:59:49.801879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.291 [2024-07-13 05:59:49.801948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.291 [2024-07-13 05:59:49.802038] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:12:58.291 [2024-07-13 05:59:49.802117] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:12:58.291 [2024-07-13 05:59:49.802184] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:12:58.291 [2024-07-13 05:59:49.802285] bdev_nvme.c:5228:aer_cb: *WARNING*: AER request execute failed 00:12:58.291 05:59:49 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:58.291 05:59:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:58.550 [2024-07-13 05:59:50.195195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:58.550 [2024-07-13 05:59:50.196725] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:58.550 [2024-07-13 05:59:50.196936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.550 [2024-07-13 05:59:50.197101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.550 [2024-07-13 05:59:50.197406] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:58.550 [2024-07-13 05:59:50.197472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.550 [2024-07-13 05:59:50.197650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.550 [2024-07-13 05:59:50.197719] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:58.550 [2024-07-13 05:59:50.197767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.550 [2024-07-13 05:59:50.197927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.550 [2024-07-13 05:59:50.197991] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:58.550 [2024-07-13 05:59:50.198034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:58.550 [2024-07-13 05:59:50.198211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:58.809 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:58.809 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:58.809 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:58.809 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:58.809 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:58.809 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:58.809 05:59:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.809 05:59:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:58.809 05:59:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.809 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:58.809 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:58.809 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:58.809 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:58.809 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:59.068 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:59.068 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:59.068 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:59.068 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:59.068 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:59.068 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:59.068 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:59.068 05:59:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:11.291 06:00:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:11.291 06:00:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:11.291 06:00:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:11.291 06:00:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:11.291 06:00:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:11.291 06:00:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:11.291 06:00:02 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.291 06:00:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:11.291 06:00:02 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.291 06:00:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:11.291 06:00:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:11.291 06:00:02 sw_hotplug -- common/autotest_common.sh@715 -- # time=45.06 00:13:11.291 06:00:02 sw_hotplug -- common/autotest_common.sh@716 -- # echo 45.06 00:13:11.291 06:00:02 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:13:11.291 06:00:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.06 00:13:11.291 06:00:02 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.06 2 00:13:11.291 remove_attach_helper took 45.06s to complete (handling 2 nvme drive(s)) 06:00:02 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:11.291 06:00:02 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 84525 00:13:11.291 06:00:02 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 84525 ']' 00:13:11.291 06:00:02 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 84525 00:13:11.291 06:00:02 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:13:11.291 06:00:02 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:11.291 06:00:02 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84525 00:13:11.291 06:00:02 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:11.291 killing process with pid 84525 00:13:11.291 06:00:02 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:11.291 06:00:02 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84525' 00:13:11.291 06:00:02 sw_hotplug -- common/autotest_common.sh@967 -- # kill 84525 00:13:11.291 06:00:02 sw_hotplug -- common/autotest_common.sh@972 -- # wait 84525 00:13:11.549 06:00:03 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:11.807 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:12.378 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:12.378 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:12.378 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:12.378 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:12.638 00:13:12.638 real 2m29.379s 00:13:12.638 user 1m49.100s 00:13:12.638 sys 0m20.019s 00:13:12.638 ************************************ 
00:13:12.638 END TEST sw_hotplug 00:13:12.638 ************************************ 00:13:12.638 06:00:04 sw_hotplug -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:12.638 06:00:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:12.638 06:00:04 -- common/autotest_common.sh@1142 -- # return 0 00:13:12.638 06:00:04 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:13:12.638 06:00:04 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:12.638 06:00:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:12.638 06:00:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:12.638 06:00:04 -- common/autotest_common.sh@10 -- # set +x 00:13:12.638 ************************************ 00:13:12.638 START TEST nvme_xnvme 00:13:12.638 ************************************ 00:13:12.638 06:00:04 nvme_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:12.638 * Looking for test storage... 00:13:12.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:12.638 06:00:04 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:12.638 06:00:04 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.638 06:00:04 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.638 06:00:04 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.638 06:00:04 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.638 06:00:04 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.638 06:00:04 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.638 06:00:04 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:12.638 06:00:04 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.638 06:00:04 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:12.638 
06:00:04 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:12.638 06:00:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:12.638 06:00:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:12.638 ************************************ 00:13:12.638 START TEST xnvme_to_malloc_dd_copy 00:13:12.638 ************************************ 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1123 -- # malloc_to_xnvme_copy 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:12.638 06:00:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:12.638 { 00:13:12.638 "subsystems": [ 00:13:12.638 { 00:13:12.638 "subsystem": "bdev", 00:13:12.638 "config": [ 00:13:12.638 { 00:13:12.638 "params": { 00:13:12.638 "block_size": 512, 00:13:12.638 "num_blocks": 2097152, 00:13:12.638 "name": "malloc0" 00:13:12.638 }, 00:13:12.638 "method": 
"bdev_malloc_create" 00:13:12.638 }, 00:13:12.639 { 00:13:12.639 "params": { 00:13:12.639 "io_mechanism": "libaio", 00:13:12.639 "filename": "/dev/nullb0", 00:13:12.639 "name": "null0" 00:13:12.639 }, 00:13:12.639 "method": "bdev_xnvme_create" 00:13:12.639 }, 00:13:12.639 { 00:13:12.639 "method": "bdev_wait_for_examine" 00:13:12.639 } 00:13:12.639 ] 00:13:12.639 } 00:13:12.639 ] 00:13:12.639 } 00:13:12.897 [2024-07-13 06:00:04.374913] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:12.897 [2024-07-13 06:00:04.375116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85850 ] 00:13:12.897 [2024-07-13 06:00:04.529040] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.897 [2024-07-13 06:00:04.572898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.650  Copying: 156/1024 [MB] (156 MBps) Copying: 327/1024 [MB] (171 MBps) Copying: 498/1024 [MB] (170 MBps) Copying: 670/1024 [MB] (172 MBps) Copying: 842/1024 [MB] (171 MBps) Copying: 1014/1024 [MB] (172 MBps) Copying: 1024/1024 [MB] (average 169 MBps) 00:13:19.650 00:13:19.650 06:00:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:19.650 06:00:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:19.650 06:00:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:19.650 06:00:11 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:19.909 { 00:13:19.909 "subsystems": [ 00:13:19.909 { 00:13:19.909 "subsystem": "bdev", 00:13:19.909 "config": [ 00:13:19.909 { 00:13:19.909 "params": { 00:13:19.909 "block_size": 512, 00:13:19.909 "num_blocks": 2097152, 00:13:19.909 "name": "malloc0" 00:13:19.909 }, 00:13:19.909 "method": "bdev_malloc_create" 00:13:19.909 }, 00:13:19.909 { 00:13:19.909 "params": { 00:13:19.909 "io_mechanism": "libaio", 00:13:19.909 "filename": "/dev/nullb0", 00:13:19.909 "name": "null0" 00:13:19.909 }, 00:13:19.909 "method": "bdev_xnvme_create" 00:13:19.909 }, 00:13:19.909 { 00:13:19.909 "method": "bdev_wait_for_examine" 00:13:19.909 } 00:13:19.909 ] 00:13:19.909 } 00:13:19.909 ] 00:13:19.909 } 00:13:19.909 [2024-07-13 06:00:11.406551] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:13:19.909 [2024-07-13 06:00:11.406755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85931 ] 00:13:19.909 [2024-07-13 06:00:11.555519] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.909 [2024-07-13 06:00:11.591482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.328  Copying: 182/1024 [MB] (182 MBps) Copying: 362/1024 [MB] (180 MBps) Copying: 543/1024 [MB] (180 MBps) Copying: 726/1024 [MB] (183 MBps) Copying: 908/1024 [MB] (182 MBps) Copying: 1024/1024 [MB] (average 181 MBps) 00:13:26.328 00:13:26.328 06:00:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:26.328 06:00:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:26.328 06:00:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:26.328 06:00:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:26.328 06:00:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:26.328 06:00:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:26.328 { 00:13:26.328 "subsystems": [ 00:13:26.328 { 00:13:26.328 "subsystem": "bdev", 00:13:26.328 "config": [ 00:13:26.328 { 00:13:26.328 "params": { 00:13:26.328 "block_size": 512, 00:13:26.328 "num_blocks": 2097152, 00:13:26.328 "name": "malloc0" 00:13:26.328 }, 00:13:26.328 "method": "bdev_malloc_create" 00:13:26.328 }, 00:13:26.328 { 00:13:26.328 "params": { 00:13:26.328 "io_mechanism": "io_uring", 00:13:26.328 "filename": "/dev/nullb0", 00:13:26.328 "name": "null0" 00:13:26.328 }, 00:13:26.328 "method": "bdev_xnvme_create" 00:13:26.328 }, 00:13:26.328 { 00:13:26.328 "method": "bdev_wait_for_examine" 00:13:26.328 } 00:13:26.328 ] 00:13:26.328 } 00:13:26.328 ] 00:13:26.328 } 00:13:26.328 [2024-07-13 06:00:17.967921] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
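# Both directions above ran with io_mechanism=libaio at roughly 170-180 MBps; the trace
# now re-enters the for-loop with io_uring. The shape of that loop, reconstructed from
# the xtrace (SPDK_DD stands in for the full build/bin/spdk_dd path used in this run):
xnvme_io=(libaio io_uring)
for io in "${xnvme_io[@]}"; do
  method_bdev_xnvme_create_0["io_mechanism"]=$io
  "$SPDK_DD" --ib=malloc0 --ob=null0 --json <(gen_conf)   # xnvme.sh@42: malloc -> null
  "$SPDK_DD" --ib=null0 --ob=malloc0 --json <(gen_conf)   # xnvme.sh@47: null -> malloc
done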
00:13:26.328 [2024-07-13 06:00:17.968333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86013 ] 00:13:26.587 [2024-07-13 06:00:18.115702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.587 [2024-07-13 06:00:18.151505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.829  Copying: 186/1024 [MB] (186 MBps) Copying: 371/1024 [MB] (185 MBps) Copying: 559/1024 [MB] (188 MBps) Copying: 745/1024 [MB] (185 MBps) Copying: 929/1024 [MB] (184 MBps) Copying: 1024/1024 [MB] (average 186 MBps) 00:13:32.829 00:13:32.829 06:00:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:32.829 06:00:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:32.829 06:00:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:32.829 06:00:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:32.829 { 00:13:32.829 "subsystems": [ 00:13:32.829 { 00:13:32.829 "subsystem": "bdev", 00:13:32.829 "config": [ 00:13:32.829 { 00:13:32.829 "params": { 00:13:32.829 "block_size": 512, 00:13:32.829 "num_blocks": 2097152, 00:13:32.829 "name": "malloc0" 00:13:32.829 }, 00:13:32.829 "method": "bdev_malloc_create" 00:13:32.829 }, 00:13:32.829 { 00:13:32.829 "params": { 00:13:32.829 "io_mechanism": "io_uring", 00:13:32.829 "filename": "/dev/nullb0", 00:13:32.829 "name": "null0" 00:13:32.829 }, 00:13:32.829 "method": "bdev_xnvme_create" 00:13:32.829 }, 00:13:32.829 { 00:13:32.829 "method": "bdev_wait_for_examine" 00:13:32.829 } 00:13:32.829 ] 00:13:32.829 } 00:13:32.829 ] 00:13:32.829 } 00:13:32.829 [2024-07-13 06:00:24.372873] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
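# The null0 side of every copy is the kernel null_blk device that this test loads up
# front and unloads at the end; the whole fixture is two modprobe calls, as traced at
# xnvme.sh@14 above and xnvme.sh@52 below:
modprobe null_blk gb=1    # creates /dev/nullb0, a 1 GiB block device that discards writes
# ...the four spdk_dd passes run against /dev/nullb0...
modprobe -r null_blk      # remove_null_blk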
00:13:32.829 [2024-07-13 06:00:24.373049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86084 ] 00:13:32.829 [2024-07-13 06:00:24.520412] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.089 [2024-07-13 06:00:24.555864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.031  Copying: 197/1024 [MB] (197 MBps) Copying: 391/1024 [MB] (194 MBps) Copying: 578/1024 [MB] (186 MBps) Copying: 768/1024 [MB] (190 MBps) Copying: 953/1024 [MB] (185 MBps) Copying: 1024/1024 [MB] (average 189 MBps) 00:13:39.031 00:13:39.031 06:00:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:39.031 06:00:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:39.031 ************************************ 00:13:39.031 END TEST xnvme_to_malloc_dd_copy 00:13:39.031 ************************************ 00:13:39.031 00:13:39.031 real 0m26.357s 00:13:39.031 user 0m21.272s 00:13:39.031 sys 0m4.594s 00:13:39.031 06:00:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:39.031 06:00:30 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:39.031 06:00:30 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:13:39.031 06:00:30 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:39.031 06:00:30 nvme_xnvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:39.031 06:00:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.031 06:00:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:39.031 ************************************ 00:13:39.031 START TEST xnvme_bdevperf 00:13:39.031 ************************************ 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1123 -- # xnvme_bdevperf 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # 
for io in "${xnvme_io[@]}" 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:39.031 06:00:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:39.031 { 00:13:39.031 "subsystems": [ 00:13:39.031 { 00:13:39.031 "subsystem": "bdev", 00:13:39.031 "config": [ 00:13:39.031 { 00:13:39.031 "params": { 00:13:39.031 "io_mechanism": "libaio", 00:13:39.031 "filename": "/dev/nullb0", 00:13:39.031 "name": "null0" 00:13:39.031 }, 00:13:39.031 "method": "bdev_xnvme_create" 00:13:39.031 }, 00:13:39.031 { 00:13:39.031 "method": "bdev_wait_for_examine" 00:13:39.031 } 00:13:39.031 ] 00:13:39.031 } 00:13:39.031 ] 00:13:39.031 } 00:13:39.289 [2024-07-13 06:00:30.790176] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:39.289 [2024-07-13 06:00:30.790391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86183 ] 00:13:39.289 [2024-07-13 06:00:30.933882] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.289 [2024-07-13 06:00:30.968610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.603 Running I/O for 5 seconds... 00:13:44.878 00:13:44.878 Latency(us) 00:13:44.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.878 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:44.878 null0 : 5.00 128939.89 503.67 0.00 0.00 493.24 135.91 930.91 00:13:44.878 =================================================================================================================== 00:13:44.878 Total : 128939.89 503.67 0.00 0.00 493.24 135.91 930.91 00:13:44.878 06:00:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:44.878 06:00:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:44.878 06:00:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:44.878 06:00:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:44.878 06:00:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:44.878 06:00:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:44.878 { 00:13:44.878 "subsystems": [ 00:13:44.878 { 00:13:44.878 "subsystem": "bdev", 00:13:44.878 "config": [ 00:13:44.878 { 00:13:44.878 "params": { 00:13:44.878 "io_mechanism": "io_uring", 00:13:44.878 "filename": "/dev/nullb0", 00:13:44.878 "name": "null0" 00:13:44.878 }, 00:13:44.878 "method": "bdev_xnvme_create" 00:13:44.878 }, 00:13:44.878 { 00:13:44.878 "method": "bdev_wait_for_examine" 00:13:44.878 } 00:13:44.878 ] 00:13:44.878 } 00:13:44.879 ] 00:13:44.879 } 00:13:44.879 [2024-07-13 06:00:36.357821] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:13:44.879 [2024-07-13 06:00:36.358029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86252 ] 00:13:44.879 [2024-07-13 06:00:36.503571] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.879 [2024-07-13 06:00:36.538138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.879 Running I/O for 5 seconds... 00:13:50.156 00:13:50.156 Latency(us) 00:13:50.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.156 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:50.156 null0 : 5.00 174549.70 681.83 0.00 0.00 363.77 196.42 1139.43 00:13:50.156 =================================================================================================================== 00:13:50.156 Total : 174549.70 681.83 0.00 0.00 363.77 196.42 1139.43 00:13:50.156 06:00:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:50.156 06:00:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:50.156 00:13:50.156 real 0m11.138s 00:13:50.156 user 0m8.273s 00:13:50.156 sys 0m2.662s 00:13:50.156 06:00:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:50.156 ************************************ 00:13:50.156 END TEST xnvme_bdevperf 00:13:50.156 ************************************ 00:13:50.156 06:00:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:50.156 06:00:41 nvme_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:13:50.156 00:13:50.156 real 0m37.693s 00:13:50.156 user 0m29.613s 00:13:50.156 sys 0m7.370s 00:13:50.156 06:00:41 nvme_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:50.157 ************************************ 00:13:50.157 END TEST nvme_xnvme 00:13:50.157 06:00:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.157 ************************************ 00:13:50.414 06:00:41 -- common/autotest_common.sh@1142 -- # return 0 00:13:50.414 06:00:41 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:50.414 06:00:41 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:50.414 06:00:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.414 06:00:41 -- common/autotest_common.sh@10 -- # set +x 00:13:50.414 ************************************ 00:13:50.414 START TEST blockdev_xnvme 00:13:50.414 ************************************ 00:13:50.414 06:00:41 blockdev_xnvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:50.414 * Looking for test storage... 
00:13:50.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:50.414 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:50.414 06:00:41 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:50.414 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:50.414 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:50.414 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:50.414 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:50.414 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:50.414 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:50.414 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:50.414 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:13:50.414 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:13:50.415 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:13:50.415 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:13:50.415 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:13:50.415 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:13:50.415 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:13:50.415 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:13:50.415 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:13:50.415 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:13:50.415 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:13:50.415 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:13:50.415 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:13:50.415 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:13:50.415 06:00:41 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:13:50.415 06:00:42 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=86376 00:13:50.415 06:00:42 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:50.415 06:00:42 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:50.415 06:00:42 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 86376 00:13:50.415 06:00:42 blockdev_xnvme -- common/autotest_common.sh@829 -- # '[' -z 86376 ']' 00:13:50.415 06:00:42 blockdev_xnvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.415 06:00:42 blockdev_xnvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:50.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.415 06:00:42 blockdev_xnvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.415 06:00:42 blockdev_xnvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:50.415 06:00:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.415 [2024-07-13 06:00:42.097443] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
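# From here blockdev.sh keeps a long-lived spdk_tgt (pid 86376) on /var/tmp/spdk.sock
# and builds the xnvme bdevs over JSON-RPC instead of per-process --json configs. The
# generated calls are printed verbatim a little further down; replayed by hand with the
# repo's rpc.py they would look like:
scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring
scripts/rpc.py bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring
scripts/rpc.py bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring   # ...and nvme2n2, nvme2n3, nvme3n1
scripts/rpc.py bdev_wait_for_examine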
00:13:50.415 [2024-07-13 06:00:42.097639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86376 ] 00:13:50.672 [2024-07-13 06:00:42.235064] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.672 [2024-07-13 06:00:42.270302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.605 06:00:43 blockdev_xnvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.605 06:00:43 blockdev_xnvme -- common/autotest_common.sh@862 -- # return 0 00:13:51.605 06:00:43 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:13:51.605 06:00:43 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:13:51.605 06:00:43 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:51.605 06:00:43 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:51.605 06:00:43 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:51.862 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:51.862 Waiting for block devices as requested 00:13:52.120 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:52.120 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:52.120 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:52.379 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:57.644 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:57.644 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:57.644 06:00:48 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:57.644 06:00:48 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:13:57.644 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:57.644 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:57.644 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:57.644 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:57.645 06:00:48 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:57.645 06:00:48 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.645 06:00:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.645 06:00:48 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:57.645 nvme0n1 00:13:57.645 nvme1n1 00:13:57.645 nvme2n1 00:13:57.645 nvme2n2 00:13:57.645 nvme2n3 00:13:57.645 nvme3n1 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.645 06:00:49 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.645 06:00:49 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:13:57.645 06:00:49 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.645 06:00:49 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.645 06:00:49 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.645 06:00:49 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:13:57.645 06:00:49 blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.645 
06:00:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.645 06:00:49 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.645 06:00:49 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:13:57.645 06:00:49 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "eb674bf8-e204-41ba-80c2-5c87711d54be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "eb674bf8-e204-41ba-80c2-5c87711d54be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "f5a23c16-8760-4072-b8a5-2db14915d699"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f5a23c16-8760-4072-b8a5-2db14915d699",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "10922ba2-a8bc-43fa-add2-fa4633ac606a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "10922ba2-a8bc-43fa-add2-fa4633ac606a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "2c2d75ba-9406-4a5a-a7b8-2b70fd9b6f10"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2c2d75ba-9406-4a5a-a7b8-2b70fd9b6f10",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "295b536c-908a-46ce-b161-4ff2d817e982"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "295b536c-908a-46ce-b161-4ff2d817e982",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "43999192-2e4a-4a37-baf2-cd47a68da4b7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "43999192-2e4a-4a37-baf2-cd47a68da4b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:57.645 06:00:49 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:13:57.645 06:00:49 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:13:57.645 06:00:49 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:13:57.645 06:00:49 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:13:57.645 06:00:49 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 86376 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@948 -- # '[' -z 86376 ']' 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@952 -- # kill -0 86376 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@953 -- # uname 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86376 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:57.645 killing process with pid 86376 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 86376' 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@967 -- # kill 86376 00:13:57.645 06:00:49 blockdev_xnvme -- common/autotest_common.sh@972 -- # wait 86376 00:13:57.904 06:00:49 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:57.904 06:00:49 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:57.904 06:00:49 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:13:57.904 06:00:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:57.904 06:00:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.904 ************************************ 00:13:57.904 START TEST bdev_hello_world 00:13:57.904 ************************************ 00:13:57.904 06:00:49 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:58.163 [2024-07-13 06:00:49.630460] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:58.163 [2024-07-13 06:00:49.630664] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86729 ] 00:13:58.163 [2024-07-13 06:00:49.777430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.163 [2024-07-13 06:00:49.815045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.421 [2024-07-13 06:00:49.976262] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:58.421 [2024-07-13 06:00:49.976335] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:58.421 [2024-07-13 06:00:49.976390] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:58.421 [2024-07-13 06:00:49.978546] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:58.421 [2024-07-13 06:00:49.979114] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:58.421 [2024-07-13 06:00:49.979182] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:58.421 [2024-07-13 06:00:49.979444] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
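# bdev_hello_world above is the smallest xnvme smoke test: hello_bdev opens nvme0n1,
# writes "Hello World!" through an io channel, and reads it back, exactly as the
# NOTICE lines trace. The invocation, repeated from the run_test line:
build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1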
00:13:58.421 00:13:58.421 [2024-07-13 06:00:49.979492] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:58.679 00:13:58.679 real 0m0.608s 00:13:58.679 user 0m0.345s 00:13:58.679 sys 0m0.156s 00:13:58.679 06:00:50 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:58.679 06:00:50 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:58.679 ************************************ 00:13:58.679 END TEST bdev_hello_world 00:13:58.679 ************************************ 00:13:58.679 06:00:50 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:13:58.679 06:00:50 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:13:58.679 06:00:50 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:58.679 06:00:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:58.679 06:00:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:58.679 ************************************ 00:13:58.679 START TEST bdev_bounds 00:13:58.679 ************************************ 00:13:58.679 06:00:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:13:58.679 06:00:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=86749 00:13:58.679 06:00:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:58.679 Process bdevio pid: 86749 00:13:58.679 06:00:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 86749' 00:13:58.679 06:00:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 86749 00:13:58.679 06:00:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:58.679 06:00:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 86749 ']' 00:13:58.679 06:00:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.679 06:00:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:58.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.679 06:00:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.679 06:00:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:58.679 06:00:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:58.679 [2024-07-13 06:00:50.331373] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
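# bdev_bounds pairs the bdevio app with an RPC trigger: bdevio comes up with -w
# (wait for the start RPC, as I read the flag) and -s 0 (PRE_RESERVED_MEM=0), then
# tests.py fires the CUnit suites. A sketch of the pair, with the backgrounding
# assumed from the waitforlisten above:
test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
test/bdev/bdevio/tests.py perform_tests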
00:13:58.679 [2024-07-13 06:00:50.331585] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86749 ] 00:13:58.938 [2024-07-13 06:00:50.478195] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:58.938 [2024-07-13 06:00:50.518216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.938 [2024-07-13 06:00:50.518447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.938 [2024-07-13 06:00:50.518497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.871 06:00:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.871 06:00:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:13:59.871 06:00:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:59.871 I/O targets: 00:13:59.871 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:59.871 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:59.871 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:59.871 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:59.871 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:59.871 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:59.871 00:13:59.871 00:13:59.871 CUnit - A unit testing framework for C - Version 2.1-3 00:13:59.871 http://cunit.sourceforge.net/ 00:13:59.871 00:13:59.871 00:13:59.871 Suite: bdevio tests on: nvme3n1 00:13:59.871 Test: blockdev write read block ...passed 00:13:59.871 Test: blockdev write zeroes read block ...passed 00:13:59.871 Test: blockdev write zeroes read no split ...passed 00:13:59.871 Test: blockdev write zeroes read split ...passed 00:13:59.871 Test: blockdev write zeroes read split partial ...passed 00:13:59.871 Test: blockdev reset ...passed 00:13:59.871 Test: blockdev write read 8 blocks ...passed 00:13:59.871 Test: blockdev write read size > 128k ...passed 00:13:59.871 Test: blockdev write read invalid size ...passed 00:13:59.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:59.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:59.871 Test: blockdev write read max offset ...passed 00:13:59.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:59.871 Test: blockdev writev readv 8 blocks ...passed 00:13:59.871 Test: blockdev writev readv 30 x 1block ...passed 00:13:59.871 Test: blockdev writev readv block ...passed 00:13:59.871 Test: blockdev writev readv size > 128k ...passed 00:13:59.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:59.871 Test: blockdev comparev and writev ...passed 00:13:59.871 Test: blockdev nvme passthru rw ...passed 00:13:59.871 Test: blockdev nvme passthru vendor specific ...passed 00:13:59.871 Test: blockdev nvme admin passthru ...passed 00:13:59.871 Test: blockdev copy ...passed 00:13:59.871 Suite: bdevio tests on: nvme2n3 00:13:59.871 Test: blockdev write read block ...passed 00:13:59.871 Test: blockdev write zeroes read block ...passed 00:13:59.871 Test: blockdev write zeroes read no split ...passed 00:13:59.871 Test: blockdev write zeroes read split ...passed 00:13:59.871 Test: blockdev write zeroes read split partial ...passed 00:13:59.871 Test: blockdev reset ...passed 
00:13:59.871 Test: blockdev write read 8 blocks ...passed 00:13:59.871 Test: blockdev write read size > 128k ...passed 00:13:59.871 Test: blockdev write read invalid size ...passed 00:13:59.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:59.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:59.871 Test: blockdev write read max offset ...passed 00:13:59.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:59.871 Test: blockdev writev readv 8 blocks ...passed 00:13:59.871 Test: blockdev writev readv 30 x 1block ...passed 00:13:59.871 Test: blockdev writev readv block ...passed 00:13:59.871 Test: blockdev writev readv size > 128k ...passed 00:13:59.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:59.871 Test: blockdev comparev and writev ...passed 00:13:59.871 Test: blockdev nvme passthru rw ...passed 00:13:59.871 Test: blockdev nvme passthru vendor specific ...passed 00:13:59.871 Test: blockdev nvme admin passthru ...passed 00:13:59.871 Test: blockdev copy ...passed 00:13:59.871 Suite: bdevio tests on: nvme2n2 00:13:59.871 Test: blockdev write read block ...passed 00:13:59.871 Test: blockdev write zeroes read block ...passed 00:13:59.871 Test: blockdev write zeroes read no split ...passed 00:13:59.871 Test: blockdev write zeroes read split ...passed 00:13:59.871 Test: blockdev write zeroes read split partial ...passed 00:13:59.871 Test: blockdev reset ...passed 00:13:59.871 Test: blockdev write read 8 blocks ...passed 00:13:59.871 Test: blockdev write read size > 128k ...passed 00:13:59.871 Test: blockdev write read invalid size ...passed 00:13:59.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:59.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:59.871 Test: blockdev write read max offset ...passed 00:13:59.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:59.871 Test: blockdev writev readv 8 blocks ...passed 00:13:59.871 Test: blockdev writev readv 30 x 1block ...passed 00:13:59.871 Test: blockdev writev readv block ...passed 00:13:59.871 Test: blockdev writev readv size > 128k ...passed 00:13:59.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:59.871 Test: blockdev comparev and writev ...passed 00:13:59.871 Test: blockdev nvme passthru rw ...passed 00:13:59.871 Test: blockdev nvme passthru vendor specific ...passed 00:13:59.871 Test: blockdev nvme admin passthru ...passed 00:13:59.871 Test: blockdev copy ...passed 00:13:59.871 Suite: bdevio tests on: nvme2n1 00:13:59.871 Test: blockdev write read block ...passed 00:13:59.871 Test: blockdev write zeroes read block ...passed 00:13:59.871 Test: blockdev write zeroes read no split ...passed 00:13:59.871 Test: blockdev write zeroes read split ...passed 00:13:59.871 Test: blockdev write zeroes read split partial ...passed 00:13:59.871 Test: blockdev reset ...passed 00:13:59.871 Test: blockdev write read 8 blocks ...passed 00:13:59.871 Test: blockdev write read size > 128k ...passed 00:13:59.871 Test: blockdev write read invalid size ...passed 00:13:59.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:59.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:59.871 Test: blockdev write read max offset ...passed 00:13:59.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:59.871 Test: blockdev writev readv 8 blocks 
...passed 00:13:59.871 Test: blockdev writev readv 30 x 1block ...passed 00:13:59.871 Test: blockdev writev readv block ...passed 00:13:59.871 Test: blockdev writev readv size > 128k ...passed 00:13:59.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:59.871 Test: blockdev comparev and writev ...passed 00:13:59.871 Test: blockdev nvme passthru rw ...passed 00:13:59.871 Test: blockdev nvme passthru vendor specific ...passed 00:13:59.871 Test: blockdev nvme admin passthru ...passed 00:13:59.871 Test: blockdev copy ...passed 00:13:59.871 Suite: bdevio tests on: nvme1n1 00:13:59.871 Test: blockdev write read block ...passed 00:13:59.871 Test: blockdev write zeroes read block ...passed 00:13:59.871 Test: blockdev write zeroes read no split ...passed 00:13:59.871 Test: blockdev write zeroes read split ...passed 00:13:59.871 Test: blockdev write zeroes read split partial ...passed 00:13:59.871 Test: blockdev reset ...passed 00:13:59.871 Test: blockdev write read 8 blocks ...passed 00:13:59.871 Test: blockdev write read size > 128k ...passed 00:13:59.871 Test: blockdev write read invalid size ...passed 00:13:59.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:59.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:59.871 Test: blockdev write read max offset ...passed 00:13:59.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:59.871 Test: blockdev writev readv 8 blocks ...passed 00:13:59.871 Test: blockdev writev readv 30 x 1block ...passed 00:13:59.871 Test: blockdev writev readv block ...passed 00:13:59.871 Test: blockdev writev readv size > 128k ...passed 00:13:59.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:59.871 Test: blockdev comparev and writev ...passed 00:13:59.871 Test: blockdev nvme passthru rw ...passed 00:13:59.871 Test: blockdev nvme passthru vendor specific ...passed 00:13:59.871 Test: blockdev nvme admin passthru ...passed 00:13:59.871 Test: blockdev copy ...passed 00:13:59.871 Suite: bdevio tests on: nvme0n1 00:13:59.871 Test: blockdev write read block ...passed 00:13:59.871 Test: blockdev write zeroes read block ...passed 00:13:59.871 Test: blockdev write zeroes read no split ...passed 00:13:59.871 Test: blockdev write zeroes read split ...passed 00:13:59.871 Test: blockdev write zeroes read split partial ...passed 00:13:59.871 Test: blockdev reset ...passed 00:13:59.871 Test: blockdev write read 8 blocks ...passed 00:13:59.871 Test: blockdev write read size > 128k ...passed 00:13:59.871 Test: blockdev write read invalid size ...passed 00:13:59.871 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:59.871 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:59.871 Test: blockdev write read max offset ...passed 00:13:59.871 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:59.871 Test: blockdev writev readv 8 blocks ...passed 00:13:59.871 Test: blockdev writev readv 30 x 1block ...passed 00:13:59.871 Test: blockdev writev readv block ...passed 00:13:59.871 Test: blockdev writev readv size > 128k ...passed 00:13:59.871 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:59.871 Test: blockdev comparev and writev ...passed 00:13:59.871 Test: blockdev nvme passthru rw ...passed 00:13:59.871 Test: blockdev nvme passthru vendor specific ...passed 00:13:59.871 Test: blockdev nvme admin passthru ...passed 00:13:59.871 Test: blockdev copy ...passed 
00:13:59.871 00:13:59.871 Run Summary: Type Total Ran Passed Failed Inactive 00:13:59.871 suites 6 6 n/a 0 0 00:13:59.871 tests 138 138 138 0 0 00:13:59.871 asserts 780 780 780 0 n/a 00:13:59.871 00:13:59.871 Elapsed time = 0.300 seconds 00:13:59.871 0 00:13:59.871 06:00:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 86749 00:13:59.871 06:00:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 86749 ']' 00:13:59.871 06:00:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 86749 00:13:59.871 06:00:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:13:59.871 06:00:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:59.871 06:00:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86749 00:13:59.871 06:00:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:59.871 06:00:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:59.872 06:00:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86749' 00:13:59.872 killing process with pid 86749 00:13:59.872 06:00:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 86749 00:13:59.872 06:00:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 86749 00:14:00.130 06:00:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:14:00.130 00:14:00.130 real 0m1.504s 00:14:00.130 user 0m3.766s 00:14:00.130 sys 0m0.329s 00:14:00.130 06:00:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.130 06:00:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:00.130 ************************************ 00:14:00.130 END TEST bdev_bounds 00:14:00.130 ************************************ 00:14:00.130 06:00:51 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:14:00.130 06:00:51 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:00.130 06:00:51 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:00.130 06:00:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.130 06:00:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.130 ************************************ 00:14:00.130 START TEST bdev_nbd 00:14:00.130 ************************************ 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 
00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=86802 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 86802 /var/tmp/spdk-nbd.sock 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 86802 ']' 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:00.130 06:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:00.131 06:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.131 06:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:00.131 [2024-07-13 06:00:51.850715] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
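# bdev_nbd exports each bdev as a kernel block device through a second RPC socket
# (/var/tmp/spdk-nbd.sock). Condensed from the nbd_common.sh trace that follows --
# the sleep interval and the nbd_stop_disk teardown are assumptions here, the rest
# is traced verbatim:
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1           # returns /dev/nbd0
for i in $(seq 1 20); do
  grep -q -w nbd0 /proc/partitions && break
  sleep 0.1                                                               # assumed interval
done
dd if=/dev/nbd0 of=test/bdev/nbdtest bs=4096 count=1 iflag=direct         # direct-I/O sanity read
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0          # assumed teardown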
00:14:00.131 [2024-07-13 06:00:51.850915] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.389 [2024-07-13 06:00:52.000458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.389 [2024-07-13 06:00:52.037596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.321 06:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.321 06:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:14:01.321 06:00:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:01.321 06:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:01.321 06:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:01.321 06:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:01.321 06:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:01.321 06:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:01.321 06:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:01.321 06:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:01.321 06:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:01.321 06:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:01.321 06:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:01.321 06:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:01.321 06:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:01.321 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:01.321 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:01.321 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:01.321 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:01.321 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:01.321 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:01.321 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:01.321 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:01.321 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:01.321 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:01.321 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:01.321 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.580 
1+0 records in 00:14:01.580 1+0 records out 00:14:01.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501394 s, 8.2 MB/s 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.580 1+0 records in 00:14:01.580 1+0 records out 00:14:01.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536877 s, 7.6 MB/s 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:01.580 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:02.183 06:00:53 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.183 1+0 records in 00:14:02.183 1+0 records out 00:14:02.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768472 s, 5.3 MB/s 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:14:02.183 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:02.184 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:02.184 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:02.184 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:14:02.184 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:02.184 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:02.184 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:02.184 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:14:02.447 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:02.447 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:02.447 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:02.447 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.448 1+0 records in 00:14:02.448 1+0 records out 00:14:02.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521435 s, 7.9 MB/s 00:14:02.448 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.448 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:02.448 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.448 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:02.448 06:00:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:02.448 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:02.448 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:02.448 06:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:14:02.448 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:02.448 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.705 1+0 records in 00:14:02.705 1+0 records out 00:14:02.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000828254 s, 4.9 MB/s 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:02.705 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:02.963 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:02.963 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:14:02.963 06:00:54 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:02.963 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:02.963 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:02.963 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:14:02.963 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:02.963 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:02.963 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:02.963 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:02.963 1+0 records in 00:14:02.963 1+0 records out 00:14:02.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734604 s, 5.6 MB/s 00:14:02.963 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.963 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:02.964 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:02.964 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:02.964 06:00:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:02.964 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:02.964 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:02.964 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:03.222 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:03.222 { 00:14:03.222 "nbd_device": "/dev/nbd0", 00:14:03.222 "bdev_name": "nvme0n1" 00:14:03.222 }, 00:14:03.222 { 00:14:03.222 "nbd_device": "/dev/nbd1", 00:14:03.222 "bdev_name": "nvme1n1" 00:14:03.222 }, 00:14:03.222 { 00:14:03.222 "nbd_device": "/dev/nbd2", 00:14:03.222 "bdev_name": "nvme2n1" 00:14:03.222 }, 00:14:03.222 { 00:14:03.222 "nbd_device": "/dev/nbd3", 00:14:03.222 "bdev_name": "nvme2n2" 00:14:03.222 }, 00:14:03.222 { 00:14:03.222 "nbd_device": "/dev/nbd4", 00:14:03.222 "bdev_name": "nvme2n3" 00:14:03.222 }, 00:14:03.222 { 00:14:03.222 "nbd_device": "/dev/nbd5", 00:14:03.222 "bdev_name": "nvme3n1" 00:14:03.222 } 00:14:03.222 ]' 00:14:03.222 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:03.222 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:03.222 { 00:14:03.222 "nbd_device": "/dev/nbd0", 00:14:03.222 "bdev_name": "nvme0n1" 00:14:03.222 }, 00:14:03.222 { 00:14:03.222 "nbd_device": "/dev/nbd1", 00:14:03.222 "bdev_name": "nvme1n1" 00:14:03.222 }, 00:14:03.222 { 00:14:03.222 "nbd_device": "/dev/nbd2", 00:14:03.222 "bdev_name": "nvme2n1" 00:14:03.222 }, 00:14:03.222 { 00:14:03.222 "nbd_device": "/dev/nbd3", 00:14:03.222 "bdev_name": "nvme2n2" 00:14:03.222 }, 00:14:03.222 { 00:14:03.222 "nbd_device": "/dev/nbd4", 00:14:03.222 "bdev_name": "nvme2n3" 00:14:03.222 }, 00:14:03.222 { 00:14:03.222 "nbd_device": "/dev/nbd5", 00:14:03.222 "bdev_name": "nvme3n1" 00:14:03.222 } 00:14:03.222 ]' 00:14:03.222 06:00:54 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:03.222 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:03.222 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:03.222 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:03.222 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:03.222 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:03.222 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.222 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:03.480 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:03.480 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:03.480 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:03.480 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.480 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.480 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:03.480 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:03.480 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.480 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.480 06:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:03.738 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:03.738 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:03.738 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:03.738 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.738 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.738 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:03.738 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:03.738 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.738 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.738 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:03.997 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:03.997 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:03.997 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:03.997 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.997 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.997 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:14:03.997 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:03.997 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.997 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.997 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:03.997 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:03.997 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:03.997 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:03.997 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.997 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.997 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:04.254 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:04.254 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.254 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.254 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:04.254 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:04.254 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:04.254 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:04.254 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.254 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.254 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:04.254 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:04.254 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.254 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.254 06:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:04.513 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:04.513 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:04.513 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:04.513 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.513 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.513 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:04.513 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:04.513 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.513 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:04.513 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:04.513 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:04.772 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:05.030 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:05.030 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:05.289 /dev/nbd0 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.289 1+0 records in 00:14:05.289 1+0 records out 00:14:05.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660062 s, 6.2 MB/s 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:05.289 06:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:14:05.548 /dev/nbd1 00:14:05.548 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:05.548 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:05.548 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:14:05.548 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:05.548 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:05.548 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:05.548 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:14:05.548 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:05.548 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:05.548 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:05.548 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.548 1+0 records in 00:14:05.548 1+0 records out 00:14:05.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000766752 s, 5.3 MB/s 00:14:05.548 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.548 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:05.548 06:00:57 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.548 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:05.548 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:05.548 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.548 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:05.548 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:14:05.805 /dev/nbd10 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.805 1+0 records in 00:14:05.805 1+0 records out 00:14:05.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000624192 s, 6.6 MB/s 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:05.805 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:14:06.062 /dev/nbd11 00:14:06.062 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:06.062 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:06.062 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:14:06.062 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:06.062 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:06.062 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:06.062 06:00:57 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:14:06.062 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:06.062 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:06.062 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:06.062 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.062 1+0 records in 00:14:06.062 1+0 records out 00:14:06.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664317 s, 6.2 MB/s 00:14:06.062 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.062 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:06.062 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.062 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:06.062 06:00:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:06.062 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.062 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:06.062 06:00:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:14:06.320 /dev/nbd12 00:14:06.320 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.577 1+0 records in 00:14:06.577 1+0 records out 00:14:06.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000698329 s, 5.9 MB/s 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:06.577 /dev/nbd13 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:14:06.577 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:06.834 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:06.834 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:14:06.834 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:14:06.834 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:06.834 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:06.834 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.834 1+0 records in 00:14:06.834 1+0 records out 00:14:06.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000946993 s, 4.3 MB/s 00:14:06.834 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.834 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:14:06.834 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.834 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:06.834 06:00:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:14:06.834 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.834 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:06.834 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:06.834 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:06.834 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:07.092 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:07.092 { 00:14:07.092 "nbd_device": "/dev/nbd0", 00:14:07.092 "bdev_name": "nvme0n1" 00:14:07.092 }, 00:14:07.092 { 00:14:07.092 "nbd_device": "/dev/nbd1", 00:14:07.092 "bdev_name": "nvme1n1" 00:14:07.092 }, 00:14:07.092 { 00:14:07.092 "nbd_device": "/dev/nbd10", 00:14:07.092 "bdev_name": "nvme2n1" 00:14:07.092 }, 00:14:07.092 { 00:14:07.092 "nbd_device": "/dev/nbd11", 00:14:07.092 "bdev_name": "nvme2n2" 00:14:07.092 }, 00:14:07.092 { 00:14:07.092 "nbd_device": "/dev/nbd12", 00:14:07.092 "bdev_name": "nvme2n3" 00:14:07.092 }, 00:14:07.092 { 00:14:07.092 "nbd_device": "/dev/nbd13", 00:14:07.092 "bdev_name": "nvme3n1" 00:14:07.092 } 00:14:07.092 ]' 00:14:07.092 06:00:58 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:07.092 { 00:14:07.092 "nbd_device": "/dev/nbd0", 00:14:07.092 "bdev_name": "nvme0n1" 00:14:07.092 }, 00:14:07.092 { 00:14:07.092 "nbd_device": "/dev/nbd1", 00:14:07.092 "bdev_name": "nvme1n1" 00:14:07.092 }, 00:14:07.092 { 00:14:07.092 "nbd_device": "/dev/nbd10", 00:14:07.092 "bdev_name": "nvme2n1" 00:14:07.092 }, 00:14:07.092 { 00:14:07.092 "nbd_device": "/dev/nbd11", 00:14:07.092 "bdev_name": "nvme2n2" 00:14:07.092 }, 00:14:07.092 { 00:14:07.092 "nbd_device": "/dev/nbd12", 00:14:07.092 "bdev_name": "nvme2n3" 00:14:07.092 }, 00:14:07.092 { 00:14:07.092 "nbd_device": "/dev/nbd13", 00:14:07.092 "bdev_name": "nvme3n1" 00:14:07.092 } 00:14:07.092 ]' 00:14:07.092 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:07.092 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:07.092 /dev/nbd1 00:14:07.092 /dev/nbd10 00:14:07.092 /dev/nbd11 00:14:07.092 /dev/nbd12 00:14:07.092 /dev/nbd13' 00:14:07.092 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:07.092 /dev/nbd1 00:14:07.092 /dev/nbd10 00:14:07.092 /dev/nbd11 00:14:07.092 /dev/nbd12 00:14:07.092 /dev/nbd13' 00:14:07.092 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:07.092 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:07.092 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:07.092 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:07.092 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:07.092 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:07.092 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:07.092 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:07.092 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:07.092 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:07.092 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:07.092 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:07.092 256+0 records in 00:14:07.092 256+0 records out 00:14:07.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00928939 s, 113 MB/s 00:14:07.092 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.092 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:07.351 256+0 records in 00:14:07.351 256+0 records out 00:14:07.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165372 s, 6.3 MB/s 00:14:07.351 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.351 06:00:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:07.351 256+0 records in 00:14:07.351 256+0 records out 00:14:07.351 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.179071 s, 5.9 MB/s 00:14:07.351 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.351 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:07.610 256+0 records in 00:14:07.610 256+0 records out 00:14:07.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174082 s, 6.0 MB/s 00:14:07.610 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.610 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:07.867 256+0 records in 00:14:07.867 256+0 records out 00:14:07.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173983 s, 6.0 MB/s 00:14:07.867 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.867 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:07.867 256+0 records in 00:14:07.867 256+0 records out 00:14:07.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148526 s, 7.1 MB/s 00:14:07.867 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:07.867 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:08.125 256+0 records in 00:14:08.125 256+0 records out 00:14:08.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173866 s, 6.0 MB/s 00:14:08.125 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:08.125 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:08.125 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:08.125 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:08.125 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:08.125 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:08.125 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:08.125 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:08.125 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:08.125 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:08.125 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:08.125 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:08.125 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:08.125 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:08.125 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:08.126 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:08.126 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:08.126 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:08.126 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:08.126 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:08.126 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:08.126 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:08.126 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:08.126 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:08.126 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:08.126 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.126 06:00:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:08.384 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:08.384 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:08.384 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:08.384 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.384 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.384 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:08.384 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:08.384 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.384 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.384 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:08.642 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:08.642 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:08.642 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:08.642 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.642 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.642 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:08.642 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:08.642 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.642 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.642 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:08.900 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:08.900 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:08.900 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:08.900 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.900 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.900 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:08.900 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:08.900 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.900 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.900 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:09.160 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:09.160 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:09.160 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:09.160 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.160 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.160 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:09.160 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:09.160 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.160 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.160 06:01:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:09.419 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:09.419 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:09.419 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:09.419 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.419 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.419 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:09.419 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:09.419 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.419 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:09.419 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:09.677 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:09.677 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:09.677 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:09.677 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:09.677 06:01:01 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:09.677 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:09.677 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:09.677 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:09.677 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:09.677 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:09.677 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:09.934 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:09.934 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:09.934 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:09.934 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:09.934 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:09.934 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:09.934 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:09.934 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:09.934 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:09.934 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:09.934 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:09.934 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:09.934 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:09.934 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:09.934 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:09.934 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:14:09.934 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:14:09.934 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:10.192 malloc_lvol_verify 00:14:10.192 06:01:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:10.450 c32fa499-d63e-4920-965e-778efe9c912b 00:14:10.450 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:10.709 077f6a7a-30bb-4bd6-b8cb-dbd7d82f64b6 00:14:10.709 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:10.966 /dev/nbd0 00:14:10.966 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:14:10.966 mke2fs 1.46.5 (30-Dec-2021) 00:14:10.966 Discarding device blocks: 0/4096 done 
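[annotation] The nbd_with_lvol_verify step traced around this point builds a malloc-backed lvolstore, exposes a small lvol over /dev/nbd0, and treats a successful mkfs.ext4 as proof that I/O round-trips through the whole bdev stack. The same sequence, condensed into the bare RPC calls that appear in the trace (socket path and sizes copied from the log; assumes the bdev_svc app is still running):

    # Sketch only: lvol-over-NBD verification, condensed from the traced RPCs.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # A 16 MiB malloc bdev with 512-byte blocks backs the lvolstore.
    "$rpc_py" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
    "$rpc_py" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    # Carve a 4 MiB lvol out of the store and map it to an NBD device.
    "$rpc_py" -s "$sock" bdev_lvol_create lvol 4 -l lvs
    "$rpc_py" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0    # filesystem creation exercises both reads and writes
    "$rpc_py" -s "$sock" nbd_stop_disk /dev/nbd0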
00:14:10.966 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:10.966 00:14:10.966 Allocating group tables: 0/1 done 00:14:10.966 Writing inode tables: 0/1 done 00:14:10.966 Creating journal (1024 blocks): done 00:14:10.966 Writing superblocks and filesystem accounting information: 0/1 done 00:14:10.966 00:14:10.966 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:14:10.966 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:10.966 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:10.966 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:10.966 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:10.966 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:10.966 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:10.966 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 86802 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 86802 ']' 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 86802 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86802 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:11.224 killing process with pid 86802 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86802' 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 86802 00:14:11.224 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 86802 00:14:11.482 06:01:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:14:11.482 00:14:11.482 real 0m11.240s 00:14:11.482 user 0m16.024s 00:14:11.482 sys 0m4.040s 00:14:11.482 06:01:02 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:14:11.482 06:01:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:11.482 ************************************ 00:14:11.482 END TEST bdev_nbd 00:14:11.482 ************************************ 00:14:11.482 06:01:03 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:14:11.482 06:01:03 blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:14:11.482 06:01:03 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:14:11.482 06:01:03 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:14:11.482 06:01:03 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:14:11.482 06:01:03 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:11.482 06:01:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.482 06:01:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:11.482 ************************************ 00:14:11.482 START TEST bdev_fio 00:14:11.482 ************************************ 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:11.482 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:14:11.482 06:01:03 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n2]' 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n2 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n3]' 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n3 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:11.482 ************************************ 00:14:11.482 START TEST bdev_fio_rw_verify 00:14:11.482 ************************************ 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:11.482 
06:01:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:14:11.482 06:01:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:11.483 06:01:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:11.740 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.740 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.740 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.740 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.740 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.740 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:11.740 fio-3.35 00:14:11.740 Starting 6 threads 00:14:23.949 00:14:23.949 
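Before the results below: the job file assembled above pairs one [job_*] section per xNVMe bdev with parameters passed on the fio command line. A minimal sketch of doing the same by hand; the [global] verify settings are an assumption about what fio_config_gen emits, while the job names, serialize_overlap=1, the LD_PRELOAD of the SPDK fio plugin, and the command-line flags all come from the trace itself:

# [global] values below are an assumption about fio_config_gen's verify profile;
# serialize_overlap=1 matches the line echoed above after the fio-3.x version check
cat > bdev.fio <<'EOF'
[global]
thread=1
rw=randwrite
serialize_overlap=1
verify=crc32c
EOF
# one [job_*] section per bdev, the same sections echoed into bdev.fio above
for b in nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1; do
  printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> bdev.fio
done
# the SPDK bdev engine is injected via LD_PRELOAD, as the LD_PRELOAD line above shows
SPDK=/home/vagrant/spdk_repo/spdk
LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
  --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 bdev.fio \
  --verify_state_save=0 --spdk_json_conf="$SPDK/test/bdev/bdev.json"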
job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=87209: Sat Jul 13 06:01:13 2024
00:14:23.949 read: IOPS=28.0k, BW=109MiB/s (115MB/s)(1093MiB/10001msec)
00:14:23.949 slat (usec): min=3, max=813, avg= 7.22, stdev= 4.20
00:14:23.949 clat (usec): min=111, max=4110, avg=680.23, stdev=212.65
00:14:23.949 lat (usec): min=118, max=4119, avg=687.45, stdev=213.41
00:14:23.949 clat percentiles (usec):
00:14:23.949 | 50.000th=[ 717], 99.000th=[ 1172], 99.900th=[ 1631], 99.990th=[ 3785],
00:14:23.949 | 99.999th=[ 4080]
00:14:23.949 write: IOPS=28.3k, BW=110MiB/s (116MB/s)(1105MiB/10001msec); 0 zone resets
00:14:23.949 slat (usec): min=13, max=1541, avg=25.44, stdev=23.75
00:14:23.949 clat (usec): min=93, max=4334, avg=757.52, stdev=222.92
00:14:23.949 lat (usec): min=112, max=4367, avg=782.96, stdev=224.67
00:14:23.949 clat percentiles (usec):
00:14:23.949 | 50.000th=[ 775], 99.000th=[ 1369], 99.900th=[ 1991], 99.990th=[ 3261],
00:14:23.949 | 99.999th=[ 4293]
00:14:23.949 bw ( KiB/s): min=98320, max=141486, per=100.00%, avg=113208.32, stdev=2155.79, samples=114
00:14:23.949 iops : min=24580, max=35367, avg=28302.00, stdev=538.88, samples=114
00:14:23.949 lat (usec) : 100=0.01%, 250=2.10%, 500=13.91%, 750=36.36%, 1000=40.83%
00:14:23.949 lat (msec) : 2=6.74%, 4=0.06%, 10=0.01%
00:14:23.949 cpu : usr=62.65%, sys=25.13%, ctx=6966, majf=0, minf=25783
00:14:23.949 IO depths : 1=12.0%, 2=24.5%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:23.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:23.949 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:23.949 issued rwts: total=279694,282896,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:23.949 latency : target=0, window=0, percentile=100.00%, depth=8
00:14:23.949
00:14:23.949 Run status group 0 (all jobs):
00:14:23.949 READ: bw=109MiB/s (115MB/s), 109MiB/s-109MiB/s (115MB/s-115MB/s), io=1093MiB (1146MB), run=10001-10001msec
00:14:23.949 WRITE: bw=110MiB/s (116MB/s), 110MiB/s-110MiB/s (116MB/s-116MB/s), io=1105MiB (1159MB), run=10001-10001msec
00:14:23.949 -----------------------------------------------------
00:14:23.949 Suppressions used:
00:14:23.949 count bytes template
00:14:23.949 6 48 /usr/src/fio/parse.c
00:14:23.949 3002 288192 /usr/src/fio/iolog.c
00:14:23.949 1 8 libtcmalloc_minimal.so
00:14:23.949 1 904 libcrypto.so
00:14:23.949 -----------------------------------------------------
00:14:23.949
00:14:23.949
00:14:23.949 real 0m11.146s
00:14:23.949 user 0m38.325s
00:14:23.949 sys 0m15.373s
00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:14:23.949 ************************************
00:14:23.949 END TEST bdev_fio_rw_verify
00:14:23.949 ************************************
00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1142 -- # return 0
00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f
00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio --
common/autotest_common.sh@1281 -- # local workload=trim 00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:14:23.949 06:01:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:23.950 06:01:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "eb674bf8-e204-41ba-80c2-5c87711d54be"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "eb674bf8-e204-41ba-80c2-5c87711d54be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "f5a23c16-8760-4072-b8a5-2db14915d699"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f5a23c16-8760-4072-b8a5-2db14915d699",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "10922ba2-a8bc-43fa-add2-fa4633ac606a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "10922ba2-a8bc-43fa-add2-fa4633ac606a",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "2c2d75ba-9406-4a5a-a7b8-2b70fd9b6f10"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2c2d75ba-9406-4a5a-a7b8-2b70fd9b6f10",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "295b536c-908a-46ce-b161-4ff2d817e982"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "295b536c-908a-46ce-b161-4ff2d817e982",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "43999192-2e4a-4a37-baf2-cd47a68da4b7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "43999192-2e4a-4a37-baf2-cd47a68da4b7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:23.950 06:01:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:14:23.950 06:01:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:23.950 /home/vagrant/spdk_repo/spdk 00:14:23.950 06:01:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:14:23.950 06:01:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:14:23.950 06:01:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:14:23.950 00:14:23.950 real 0m11.320s 00:14:23.950 user 0m38.425s 00:14:23.950 sys 0m15.443s 00:14:23.950 06:01:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:23.950 06:01:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:23.950 ************************************ 00:14:23.950 END TEST bdev_fio 00:14:23.950 ************************************ 00:14:23.950 06:01:14 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:14:23.950 06:01:14 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:23.950 06:01:14 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:23.950 06:01:14 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:14:23.950 06:01:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.950 06:01:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:23.950 ************************************ 00:14:23.950 START TEST bdev_verify 00:14:23.950 ************************************ 00:14:23.950 06:01:14 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:23.950 [2024-07-13 06:01:14.489835] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:23.950 [2024-07-13 06:01:14.490076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87369 ] 00:14:23.950 [2024-07-13 06:01:14.638077] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:23.950 [2024-07-13 06:01:14.682149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.950 [2024-07-13 06:01:14.682189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.950 Running I/O for 5 seconds... 
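The bdev_verify pass that starts here drives every bdev through the bdevperf example application. A standalone reproduction of the exact invocation traced above, assuming the bdev.json written earlier in the suite:

# -q queue depth per job, -o I/O size in bytes, -w workload type, -t run time
# in seconds, -m core mask (0x3 = the two reactors reported on cores 0 and 1);
# -C is passed exactly as traced
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3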
00:14:29.214 00:14:29.214 Latency(us) 00:14:29.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.214 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:29.214 Verification LBA range: start 0x0 length 0xa0000 00:14:29.214 nvme0n1 : 5.07 1641.83 6.41 0.00 0.00 77813.73 9234.62 75783.45 00:14:29.214 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:29.214 Verification LBA range: start 0xa0000 length 0xa0000 00:14:29.214 nvme0n1 : 5.07 1514.14 5.91 0.00 0.00 84368.57 14477.50 89605.59 00:14:29.214 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:29.214 Verification LBA range: start 0x0 length 0xbd0bd 00:14:29.214 nvme1n1 : 5.07 2910.53 11.37 0.00 0.00 43689.39 5123.72 73876.95 00:14:29.214 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:29.214 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:29.214 nvme1n1 : 5.06 2730.07 10.66 0.00 0.00 46611.79 4736.47 77213.32 00:14:29.214 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:29.214 Verification LBA range: start 0x0 length 0x80000 00:14:29.214 nvme2n1 : 5.06 1643.69 6.42 0.00 0.00 77334.27 7745.16 74830.20 00:14:29.214 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:29.214 Verification LBA range: start 0x80000 length 0x80000 00:14:29.214 nvme2n1 : 5.06 1518.01 5.93 0.00 0.00 83631.58 8579.26 81979.58 00:14:29.214 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:29.214 Verification LBA range: start 0x0 length 0x80000 00:14:29.214 nvme2n2 : 5.06 1643.23 6.42 0.00 0.00 77226.86 7804.74 64821.06 00:14:29.214 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:29.214 Verification LBA range: start 0x80000 length 0x80000 00:14:29.214 nvme2n2 : 5.07 1513.34 5.91 0.00 0.00 83701.21 13047.62 69110.69 00:14:29.214 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:29.214 Verification LBA range: start 0x0 length 0x80000 00:14:29.214 nvme2n3 : 5.07 1642.52 6.42 0.00 0.00 77118.27 8460.10 73400.32 00:14:29.214 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:29.214 Verification LBA range: start 0x80000 length 0x80000 00:14:29.214 nvme2n3 : 5.08 1512.78 5.91 0.00 0.00 83551.39 12213.53 79596.45 00:14:29.214 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:29.214 Verification LBA range: start 0x0 length 0x20000 00:14:29.214 nvme3n1 : 5.07 1641.08 6.41 0.00 0.00 77054.86 9175.04 81502.95 00:14:29.214 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:29.214 Verification LBA range: start 0x20000 length 0x20000 00:14:29.214 nvme3n1 : 5.08 1536.04 6.00 0.00 0.00 82098.58 3232.12 81026.33 00:14:29.214 =================================================================================================================== 00:14:29.214 Total : 21447.25 83.78 0.00 0.00 71025.11 3232.12 89605.59 00:14:29.214 00:14:29.214 real 0m5.789s 00:14:29.214 user 0m8.982s 00:14:29.214 sys 0m1.702s 00:14:29.215 06:01:20 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:29.215 06:01:20 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:29.215 ************************************ 00:14:29.215 END TEST bdev_verify 00:14:29.215 ************************************ 00:14:29.215 06:01:20 blockdev_xnvme -- 
common/autotest_common.sh@1142 -- # return 0 00:14:29.215 06:01:20 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:29.215 06:01:20 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:14:29.215 06:01:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:29.215 06:01:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:29.215 ************************************ 00:14:29.215 START TEST bdev_verify_big_io 00:14:29.215 ************************************ 00:14:29.215 06:01:20 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:29.215 [2024-07-13 06:01:20.339350] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:29.215 [2024-07-13 06:01:20.339551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87463 ] 00:14:29.215 [2024-07-13 06:01:20.487304] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:29.215 [2024-07-13 06:01:20.522550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.215 [2024-07-13 06:01:20.522624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.215 Running I/O for 5 seconds... 00:14:35.807 00:14:35.807 Latency(us) 00:14:35.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.807 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:35.807 Verification LBA range: start 0x0 length 0xa000 00:14:35.807 nvme0n1 : 6.02 130.23 8.14 0.00 0.00 958854.98 24665.37 1014258.97 00:14:35.807 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:35.807 Verification LBA range: start 0xa000 length 0xa000 00:14:35.807 nvme0n1 : 6.01 117.18 7.32 0.00 0.00 1071651.04 28240.06 1258291.20 00:14:35.807 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:35.807 Verification LBA range: start 0x0 length 0xbd0b 00:14:35.807 nvme1n1 : 6.01 138.45 8.65 0.00 0.00 877105.70 85315.96 1197283.14 00:14:35.807 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:35.807 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:35.807 nvme1n1 : 6.01 148.98 9.31 0.00 0.00 817736.48 27882.59 846486.81 00:14:35.807 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:35.807 Verification LBA range: start 0x0 length 0x8000 00:14:35.807 nvme2n1 : 5.99 138.85 8.68 0.00 0.00 849247.45 78166.57 823608.79 00:14:35.807 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:35.807 Verification LBA range: start 0x8000 length 0x8000 00:14:35.807 nvme2n1 : 6.02 142.25 8.89 0.00 0.00 831055.57 32172.22 850299.81 00:14:35.807 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:35.807 Verification LBA range: start 0x0 length 0x8000 00:14:35.807 nvme2n2 : 6.00 104.06 6.50 0.00 0.00 1098995.18 77213.32 2287802.18 00:14:35.807 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 
65536) 00:14:35.807 Verification LBA range: start 0x8000 length 0x8000 00:14:35.807 nvme2n2 : 6.02 138.25 8.64 0.00 0.00 828596.45 20733.21 1189657.13 00:14:35.807 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:35.807 Verification LBA range: start 0x0 length 0x8000 00:14:35.807 nvme2n3 : 6.00 114.68 7.17 0.00 0.00 966726.66 75783.45 2043769.95 00:14:35.807 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:35.807 Verification LBA range: start 0x8000 length 0x8000 00:14:35.807 nvme2n3 : 6.03 99.55 6.22 0.00 0.00 1114112.00 27525.12 2501330.39 00:14:35.807 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:35.807 Verification LBA range: start 0x0 length 0x2000 00:14:35.807 nvme3n1 : 6.02 124.98 7.81 0.00 0.00 864580.83 9711.24 1662469.59 00:14:35.807 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:35.807 Verification LBA range: start 0x2000 length 0x2000 00:14:35.807 nvme3n1 : 6.02 114.26 7.14 0.00 0.00 939665.18 15728.64 2150534.05 00:14:35.807 =================================================================================================================== 00:14:35.807 Total : 1511.72 94.48 0.00 0.00 923403.21 9711.24 2501330.39 00:14:35.807 00:14:35.807 real 0m6.761s 00:14:35.807 user 0m12.338s 00:14:35.807 sys 0m0.488s 00:14:35.807 06:01:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.807 ************************************ 00:14:35.807 END TEST bdev_verify_big_io 00:14:35.807 ************************************ 00:14:35.807 06:01:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.807 06:01:27 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0 00:14:35.807 06:01:27 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:35.807 06:01:27 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:14:35.807 06:01:27 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.807 06:01:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:35.807 ************************************ 00:14:35.807 START TEST bdev_write_zeroes 00:14:35.807 ************************************ 00:14:35.807 06:01:27 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:35.807 [2024-07-13 06:01:27.139536] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:35.807 [2024-07-13 06:01:27.139692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87558 ] 00:14:35.807 [2024-07-13 06:01:27.280921] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.807 [2024-07-13 06:01:27.317064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.807 Running I/O for 1 seconds... 
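A quick cross-check of the big-I/O table above: the MiB/s column is simply IOPS multiplied by the I/O size, here the 64 KiB (-o 65536) transfers. For the Total row:

awk 'BEGIN {
  iops = 1511.72; io_size = 65536                         # Total-row IOPS, -o 65536 from the command line
  printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)   # prints 94.48, matching the MiB/s column
}'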
00:14:37.180
00:14:37.180 Latency(us)
00:14:37.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:37.180 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:37.180 nvme0n1 : 1.01 11043.26 43.14 0.00 0.00 11576.89 7149.38 20256.58
00:14:37.180 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:37.180 nvme1n1 : 1.01 17030.67 66.53 0.00 0.00 7483.95 4289.63 15252.01
00:14:37.180 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:37.180 nvme2n1 : 1.02 11067.70 43.23 0.00 0.00 11474.23 6732.33 18826.71
00:14:37.180 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:37.180 nvme2n2 : 1.02 11051.55 43.17 0.00 0.00 11481.72 7030.23 19184.17
00:14:37.180 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:37.180 nvme2n3 : 1.02 11035.12 43.11 0.00 0.00 11488.21 7119.59 19422.49
00:14:37.180 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:37.180 nvme3n1 : 1.02 11019.08 43.04 0.00 0.00 11497.22 7119.59 19541.64
00:14:37.180 ===================================================================================================================
00:14:37.180 Total : 72247.38 282.22 0.00 0.00 10558.12 4289.63 20256.58
00:14:37.180 ************************************
00:14:37.180 END TEST bdev_write_zeroes
00:14:37.180 ************************************
00:14:37.180
00:14:37.180 real 0m1.681s
00:14:37.180 user 0m0.954s
00:14:37.180 sys 0m0.552s
00:14:37.180 06:01:28 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:14:37.180 06:01:28 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:14:37.180 06:01:28 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 0
00:14:37.180 06:01:28 blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
06:01:28 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
06:01:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable
06:01:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:14:37.180 ************************************
00:14:37.180 START TEST bdev_json_nonenclosed
00:14:37.180 ************************************
00:14:37.180 06:01:28 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:37.180 [2024-07-13 06:01:28.891350] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
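bdev_json_nonenclosed, starting here, is a negative test: bdevperf is handed a config whose top level is not wrapped in a JSON object, which is what the "not enclosed in {}" error below complains about. An illustrative sketch; the repo's actual nonenclosed.json may differ:

SPDK=/home/vagrant/spdk_repo/spdk
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": [
  { "subsystem": "bdev", "config": [] }
]
EOF
"$SPDK/build/examples/bdevperf" --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1
echo "exit status: $?"   # non-zero by design; the harness below records es=234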
00:14:37.180 [2024-07-13 06:01:28.891798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87594 ] 00:14:37.436 [2024-07-13 06:01:29.041389] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.436 [2024-07-13 06:01:29.079865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.436 [2024-07-13 06:01:29.079991] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:37.436 [2024-07-13 06:01:29.080028] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:37.436 [2024-07-13 06:01:29.080043] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:37.694 ************************************ 00:14:37.694 END TEST bdev_json_nonenclosed 00:14:37.694 ************************************ 00:14:37.694 00:14:37.694 real 0m0.373s 00:14:37.694 user 0m0.157s 00:14:37.694 sys 0m0.112s 00:14:37.694 06:01:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:14:37.694 06:01:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:37.694 06:01:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:37.694 06:01:29 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:14:37.694 06:01:29 blockdev_xnvme -- bdev/blockdev.sh@782 -- # true 00:14:37.694 06:01:29 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:37.694 06:01:29 blockdev_xnvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:14:37.694 06:01:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:37.694 06:01:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:37.694 ************************************ 00:14:37.694 START TEST bdev_json_nonarray 00:14:37.694 ************************************ 00:14:37.694 06:01:29 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:37.694 [2024-07-13 06:01:29.314233] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:37.694 [2024-07-13 06:01:29.314439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87620 ] 00:14:37.951 [2024-07-13 06:01:29.462878] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.951 [2024-07-13 06:01:29.498661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.951 [2024-07-13 06:01:29.498784] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
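bdev_json_nonarray is the sibling negative case: "subsystems" is present but is an object rather than an array, tripping the json_config.c:614 check shown above. An illustrative sketch of such a config; the repo's actual nonarray.json may differ:

cat > /tmp/nonarray.json <<'EOF'
{
  "subsystems": { "subsystem": "bdev", "config": [] }
}
EOF
# bdevperf exits non-zero on this too (es=234 in the harness accounting below)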
00:14:37.951 [2024-07-13 06:01:29.498816] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:37.951 [2024-07-13 06:01:29.498843] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:37.951 00:14:37.951 real 0m0.373s 00:14:37.951 user 0m0.167s 00:14:37.951 sys 0m0.101s 00:14:37.951 06:01:29 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:14:37.951 06:01:29 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:37.951 06:01:29 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:37.951 ************************************ 00:14:37.951 END TEST bdev_json_nonarray 00:14:37.951 ************************************ 00:14:37.951 06:01:29 blockdev_xnvme -- common/autotest_common.sh@1142 -- # return 234 00:14:37.951 06:01:29 blockdev_xnvme -- bdev/blockdev.sh@785 -- # true 00:14:37.951 06:01:29 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:14:37.951 06:01:29 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:14:37.951 06:01:29 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:14:37.951 06:01:29 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:14:37.951 06:01:29 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup 00:14:37.951 06:01:29 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:37.951 06:01:29 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:37.951 06:01:29 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:37.951 06:01:29 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:37.951 06:01:29 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:37.951 06:01:29 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:37.951 06:01:29 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:38.516 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:41.793 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:42.050 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:42.050 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:42.050 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:42.309 00:14:42.309 real 0m51.896s 00:14:42.309 user 1m30.192s 00:14:42.309 sys 0m32.881s 00:14:42.309 06:01:33 blockdev_xnvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:42.309 06:01:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:42.309 ************************************ 00:14:42.309 END TEST blockdev_xnvme 00:14:42.309 ************************************ 00:14:42.309 06:01:33 -- common/autotest_common.sh@1142 -- # return 0 00:14:42.309 06:01:33 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:42.309 06:01:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:42.309 06:01:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.309 06:01:33 -- common/autotest_common.sh@10 -- # set +x 00:14:42.309 ************************************ 00:14:42.309 START TEST ublk 00:14:42.309 ************************************ 00:14:42.309 06:01:33 ublk -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:42.309 * Looking for test storage... 
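The setup.sh lines above rebind the emulated NVMe controllers from the kernel nvme driver to uio_pci_generic so that user-space targets can claim them. The resulting binding can be confirmed from sysfs, for example:

# BDFs taken from the rebind messages above
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
  drv=$(readlink -f "/sys/bus/pci/devices/$bdf/driver" 2>/dev/null)
  printf '%s -> %s\n' "$bdf" "${drv##*/}"   # expect uio_pci_generic after setup.sh
done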
00:14:42.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:42.309 06:01:33 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:42.309 06:01:33 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:42.309 06:01:33 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:42.309 06:01:33 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:42.309 06:01:33 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:42.309 06:01:33 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:42.309 06:01:33 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:42.309 06:01:33 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:42.309 06:01:33 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:42.309 06:01:33 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:42.309 06:01:33 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:42.309 06:01:33 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:42.309 06:01:33 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:42.309 06:01:33 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:42.309 06:01:33 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:42.309 06:01:33 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:42.309 06:01:33 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:42.309 06:01:33 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:42.309 06:01:33 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:42.309 06:01:33 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:42.309 06:01:33 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:42.309 06:01:33 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.309 06:01:33 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:42.309 ************************************ 00:14:42.309 START TEST test_save_ublk_config 00:14:42.309 ************************************ 00:14:42.309 06:01:33 ublk.test_save_ublk_config -- common/autotest_common.sh@1123 -- # test_save_config 00:14:42.309 06:01:33 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:42.309 06:01:33 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=87899 00:14:42.309 06:01:33 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:42.309 06:01:33 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:42.309 06:01:33 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 87899 00:14:42.309 06:01:33 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 87899 ']' 00:14:42.309 06:01:33 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.309 06:01:33 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:42.309 06:01:33 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.309 06:01:33 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:42.309 06:01:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:42.567 [2024-07-13 06:01:34.086409] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
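waitforlisten, invoked above, blocks until the freshly started spdk_tgt answers on its RPC socket. Roughly what it does, as a sketch rather than the helper's exact implementation (max_retries=100 is from the trace; rpc_get_methods is a standard SPDK RPC used here as a liveness probe):

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" -L ublk &
tgtpid=$!
for ((i = 0; i < 100; i++)); do   # max_retries=100, as in the trace
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.5
done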
00:14:42.567 [2024-07-13 06:01:34.086834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87899 ] 00:14:42.567 [2024-07-13 06:01:34.247589] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.826 [2024-07-13 06:01:34.303408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.393 06:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:43.393 06:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:14:43.393 06:01:34 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:43.393 06:01:34 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:43.393 06:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.393 06:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:43.393 [2024-07-13 06:01:34.966337] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:43.393 [2024-07-13 06:01:34.966705] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:43.393 malloc0 00:14:43.393 [2024-07-13 06:01:34.990358] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:43.393 [2024-07-13 06:01:34.990469] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:43.393 [2024-07-13 06:01:34.990490] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:43.393 [2024-07-13 06:01:34.990518] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:43.393 [2024-07-13 06:01:34.999363] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:43.393 [2024-07-13 06:01:34.999396] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:43.393 [2024-07-13 06:01:35.006264] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:43.393 [2024-07-13 06:01:35.006408] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:43.393 [2024-07-13 06:01:35.023255] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:43.393 0 00:14:43.393 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.393 06:01:35 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:43.393 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.393 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:43.652 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.652 06:01:35 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:43.652 "subsystems": [ 00:14:43.652 { 00:14:43.652 "subsystem": "keyring", 00:14:43.652 "config": [] 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "subsystem": "iobuf", 00:14:43.652 "config": [ 00:14:43.652 { 00:14:43.652 "method": "iobuf_set_options", 00:14:43.652 "params": { 00:14:43.652 "small_pool_count": 8192, 00:14:43.652 "large_pool_count": 1024, 00:14:43.652 "small_bufsize": 8192, 00:14:43.652 "large_bufsize": 135168 00:14:43.652 } 00:14:43.652 } 00:14:43.652 ] 00:14:43.652 }, 00:14:43.652 { 
00:14:43.652 "subsystem": "sock", 00:14:43.652 "config": [ 00:14:43.652 { 00:14:43.652 "method": "sock_set_default_impl", 00:14:43.652 "params": { 00:14:43.652 "impl_name": "posix" 00:14:43.652 } 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "method": "sock_impl_set_options", 00:14:43.652 "params": { 00:14:43.652 "impl_name": "ssl", 00:14:43.652 "recv_buf_size": 4096, 00:14:43.652 "send_buf_size": 4096, 00:14:43.652 "enable_recv_pipe": true, 00:14:43.652 "enable_quickack": false, 00:14:43.652 "enable_placement_id": 0, 00:14:43.652 "enable_zerocopy_send_server": true, 00:14:43.652 "enable_zerocopy_send_client": false, 00:14:43.652 "zerocopy_threshold": 0, 00:14:43.652 "tls_version": 0, 00:14:43.652 "enable_ktls": false 00:14:43.652 } 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "method": "sock_impl_set_options", 00:14:43.652 "params": { 00:14:43.652 "impl_name": "posix", 00:14:43.652 "recv_buf_size": 2097152, 00:14:43.652 "send_buf_size": 2097152, 00:14:43.652 "enable_recv_pipe": true, 00:14:43.652 "enable_quickack": false, 00:14:43.652 "enable_placement_id": 0, 00:14:43.652 "enable_zerocopy_send_server": true, 00:14:43.652 "enable_zerocopy_send_client": false, 00:14:43.652 "zerocopy_threshold": 0, 00:14:43.652 "tls_version": 0, 00:14:43.652 "enable_ktls": false 00:14:43.652 } 00:14:43.652 } 00:14:43.652 ] 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "subsystem": "vmd", 00:14:43.652 "config": [] 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "subsystem": "accel", 00:14:43.652 "config": [ 00:14:43.652 { 00:14:43.652 "method": "accel_set_options", 00:14:43.652 "params": { 00:14:43.652 "small_cache_size": 128, 00:14:43.652 "large_cache_size": 16, 00:14:43.652 "task_count": 2048, 00:14:43.652 "sequence_count": 2048, 00:14:43.652 "buf_count": 2048 00:14:43.652 } 00:14:43.652 } 00:14:43.652 ] 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "subsystem": "bdev", 00:14:43.652 "config": [ 00:14:43.652 { 00:14:43.652 "method": "bdev_set_options", 00:14:43.652 "params": { 00:14:43.652 "bdev_io_pool_size": 65535, 00:14:43.652 "bdev_io_cache_size": 256, 00:14:43.652 "bdev_auto_examine": true, 00:14:43.652 "iobuf_small_cache_size": 128, 00:14:43.652 "iobuf_large_cache_size": 16 00:14:43.652 } 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "method": "bdev_raid_set_options", 00:14:43.652 "params": { 00:14:43.652 "process_window_size_kb": 1024 00:14:43.652 } 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "method": "bdev_iscsi_set_options", 00:14:43.652 "params": { 00:14:43.652 "timeout_sec": 30 00:14:43.652 } 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "method": "bdev_nvme_set_options", 00:14:43.652 "params": { 00:14:43.652 "action_on_timeout": "none", 00:14:43.652 "timeout_us": 0, 00:14:43.652 "timeout_admin_us": 0, 00:14:43.652 "keep_alive_timeout_ms": 10000, 00:14:43.652 "arbitration_burst": 0, 00:14:43.652 "low_priority_weight": 0, 00:14:43.652 "medium_priority_weight": 0, 00:14:43.652 "high_priority_weight": 0, 00:14:43.652 "nvme_adminq_poll_period_us": 10000, 00:14:43.652 "nvme_ioq_poll_period_us": 0, 00:14:43.652 "io_queue_requests": 0, 00:14:43.652 "delay_cmd_submit": true, 00:14:43.652 "transport_retry_count": 4, 00:14:43.652 "bdev_retry_count": 3, 00:14:43.652 "transport_ack_timeout": 0, 00:14:43.652 "ctrlr_loss_timeout_sec": 0, 00:14:43.652 "reconnect_delay_sec": 0, 00:14:43.652 "fast_io_fail_timeout_sec": 0, 00:14:43.652 "disable_auto_failback": false, 00:14:43.652 "generate_uuids": false, 00:14:43.652 "transport_tos": 0, 00:14:43.652 "nvme_error_stat": false, 00:14:43.652 "rdma_srq_size": 0, 00:14:43.652 
"io_path_stat": false, 00:14:43.652 "allow_accel_sequence": false, 00:14:43.652 "rdma_max_cq_size": 0, 00:14:43.652 "rdma_cm_event_timeout_ms": 0, 00:14:43.652 "dhchap_digests": [ 00:14:43.652 "sha256", 00:14:43.652 "sha384", 00:14:43.652 "sha512" 00:14:43.652 ], 00:14:43.652 "dhchap_dhgroups": [ 00:14:43.652 "null", 00:14:43.652 "ffdhe2048", 00:14:43.652 "ffdhe3072", 00:14:43.652 "ffdhe4096", 00:14:43.652 "ffdhe6144", 00:14:43.652 "ffdhe8192" 00:14:43.652 ] 00:14:43.652 } 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "method": "bdev_nvme_set_hotplug", 00:14:43.652 "params": { 00:14:43.652 "period_us": 100000, 00:14:43.652 "enable": false 00:14:43.652 } 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "method": "bdev_malloc_create", 00:14:43.652 "params": { 00:14:43.652 "name": "malloc0", 00:14:43.652 "num_blocks": 8192, 00:14:43.652 "block_size": 4096, 00:14:43.652 "physical_block_size": 4096, 00:14:43.652 "uuid": "f0880210-93b7-4b46-9972-ada849ea4d7e", 00:14:43.652 "optimal_io_boundary": 0 00:14:43.652 } 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "method": "bdev_wait_for_examine" 00:14:43.652 } 00:14:43.652 ] 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "subsystem": "scsi", 00:14:43.652 "config": null 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "subsystem": "scheduler", 00:14:43.652 "config": [ 00:14:43.652 { 00:14:43.652 "method": "framework_set_scheduler", 00:14:43.652 "params": { 00:14:43.652 "name": "static" 00:14:43.652 } 00:14:43.652 } 00:14:43.652 ] 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "subsystem": "vhost_scsi", 00:14:43.652 "config": [] 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "subsystem": "vhost_blk", 00:14:43.652 "config": [] 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "subsystem": "ublk", 00:14:43.652 "config": [ 00:14:43.652 { 00:14:43.652 "method": "ublk_create_target", 00:14:43.652 "params": { 00:14:43.652 "cpumask": "1" 00:14:43.652 } 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "method": "ublk_start_disk", 00:14:43.652 "params": { 00:14:43.652 "bdev_name": "malloc0", 00:14:43.652 "ublk_id": 0, 00:14:43.652 "num_queues": 1, 00:14:43.652 "queue_depth": 128 00:14:43.652 } 00:14:43.652 } 00:14:43.652 ] 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "subsystem": "nbd", 00:14:43.652 "config": [] 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "subsystem": "nvmf", 00:14:43.652 "config": [ 00:14:43.652 { 00:14:43.652 "method": "nvmf_set_config", 00:14:43.652 "params": { 00:14:43.652 "discovery_filter": "match_any", 00:14:43.652 "admin_cmd_passthru": { 00:14:43.652 "identify_ctrlr": false 00:14:43.652 } 00:14:43.652 } 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "method": "nvmf_set_max_subsystems", 00:14:43.652 "params": { 00:14:43.652 "max_subsystems": 1024 00:14:43.652 } 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "method": "nvmf_set_crdt", 00:14:43.652 "params": { 00:14:43.652 "crdt1": 0, 00:14:43.652 "crdt2": 0, 00:14:43.652 "crdt3": 0 00:14:43.652 } 00:14:43.652 } 00:14:43.652 ] 00:14:43.652 }, 00:14:43.652 { 00:14:43.652 "subsystem": "iscsi", 00:14:43.652 "config": [ 00:14:43.652 { 00:14:43.652 "method": "iscsi_set_options", 00:14:43.652 "params": { 00:14:43.652 "node_base": "iqn.2016-06.io.spdk", 00:14:43.652 "max_sessions": 128, 00:14:43.652 "max_connections_per_session": 2, 00:14:43.652 "max_queue_depth": 64, 00:14:43.652 "default_time2wait": 2, 00:14:43.652 "default_time2retain": 20, 00:14:43.652 "first_burst_length": 8192, 00:14:43.652 "immediate_data": true, 00:14:43.652 "allow_duplicated_isid": false, 00:14:43.652 "error_recovery_level": 0, 00:14:43.652 "nop_timeout": 60, 
00:14:43.652 "nop_in_interval": 30, 00:14:43.652 "disable_chap": false, 00:14:43.652 "require_chap": false, 00:14:43.652 "mutual_chap": false, 00:14:43.652 "chap_group": 0, 00:14:43.652 "max_large_datain_per_connection": 64, 00:14:43.652 "max_r2t_per_connection": 4, 00:14:43.652 "pdu_pool_size": 36864, 00:14:43.652 "immediate_data_pool_size": 16384, 00:14:43.652 "data_out_pool_size": 2048 00:14:43.652 } 00:14:43.652 } 00:14:43.652 ] 00:14:43.652 } 00:14:43.652 ] 00:14:43.653 }' 00:14:43.653 06:01:35 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 87899 00:14:43.653 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 87899 ']' 00:14:43.653 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 87899 00:14:43.653 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:14:43.653 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:43.653 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87899 00:14:43.653 killing process with pid 87899 00:14:43.653 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:43.653 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:43.653 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87899' 00:14:43.653 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 87899 00:14:43.653 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 87899 00:14:43.911 [2024-07-13 06:01:35.485821] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:43.911 [2024-07-13 06:01:35.518326] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:43.911 [2024-07-13 06:01:35.518521] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:43.911 [2024-07-13 06:01:35.527313] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:43.911 [2024-07-13 06:01:35.527393] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:43.911 [2024-07-13 06:01:35.527411] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:43.911 [2024-07-13 06:01:35.527445] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:43.911 [2024-07-13 06:01:35.527669] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:44.170 06:01:35 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=87933 00:14:44.170 06:01:35 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 87933 00:14:44.170 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@829 -- # '[' -z 87933 ']' 00:14:44.170 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.170 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.170 06:01:35 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:44.170 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:44.170 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.170 06:01:35 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:44.170 "subsystems": [ 00:14:44.170 { 00:14:44.170 "subsystem": "keyring", 00:14:44.170 "config": [] 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "subsystem": "iobuf", 00:14:44.170 "config": [ 00:14:44.170 { 00:14:44.170 "method": "iobuf_set_options", 00:14:44.170 "params": { 00:14:44.170 "small_pool_count": 8192, 00:14:44.170 "large_pool_count": 1024, 00:14:44.170 "small_bufsize": 8192, 00:14:44.170 "large_bufsize": 135168 00:14:44.170 } 00:14:44.170 } 00:14:44.170 ] 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "subsystem": "sock", 00:14:44.170 "config": [ 00:14:44.170 { 00:14:44.170 "method": "sock_set_default_impl", 00:14:44.170 "params": { 00:14:44.170 "impl_name": "posix" 00:14:44.170 } 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "method": "sock_impl_set_options", 00:14:44.170 "params": { 00:14:44.170 "impl_name": "ssl", 00:14:44.170 "recv_buf_size": 4096, 00:14:44.170 "send_buf_size": 4096, 00:14:44.170 "enable_recv_pipe": true, 00:14:44.170 "enable_quickack": false, 00:14:44.170 "enable_placement_id": 0, 00:14:44.170 "enable_zerocopy_send_server": true, 00:14:44.170 "enable_zerocopy_send_client": false, 00:14:44.170 "zerocopy_threshold": 0, 00:14:44.170 "tls_version": 0, 00:14:44.170 "enable_ktls": false 00:14:44.170 } 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "method": "sock_impl_set_options", 00:14:44.170 "params": { 00:14:44.170 "impl_name": "posix", 00:14:44.170 "recv_buf_size": 2097152, 00:14:44.170 "send_buf_size": 2097152, 00:14:44.170 "enable_recv_pipe": true, 00:14:44.170 "enable_quickack": false, 00:14:44.170 "enable_placement_id": 0, 00:14:44.170 "enable_zerocopy_send_server": true, 00:14:44.170 "enable_zerocopy_send_client": false, 00:14:44.170 "zerocopy_threshold": 0, 00:14:44.170 "tls_version": 0, 00:14:44.170 "enable_ktls": false 00:14:44.170 } 00:14:44.170 } 00:14:44.170 ] 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "subsystem": "vmd", 00:14:44.170 "config": [] 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "subsystem": "accel", 00:14:44.170 "config": [ 00:14:44.170 { 00:14:44.170 "method": "accel_set_options", 00:14:44.170 "params": { 00:14:44.170 "small_cache_size": 128, 00:14:44.170 "large_cache_size": 16, 00:14:44.170 "task_count": 2048, 00:14:44.170 "sequence_count": 2048, 00:14:44.170 "buf_count": 2048 00:14:44.170 } 00:14:44.170 } 00:14:44.170 ] 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "subsystem": "bdev", 00:14:44.170 "config": [ 00:14:44.170 { 00:14:44.170 "method": "bdev_set_options", 00:14:44.170 "params": { 00:14:44.170 "bdev_io_pool_size": 65535, 00:14:44.170 "bdev_io_cache_size": 256, 00:14:44.170 "bdev_auto_examine": true, 00:14:44.170 "iobuf_small_cache_size": 128, 00:14:44.170 "iobuf_large_cache_size": 16 00:14:44.170 } 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "method": "bdev_raid_set_options", 00:14:44.170 "params": { 00:14:44.170 "process_window_size_kb": 1024 00:14:44.170 } 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "method": "bdev_iscsi_set_options", 00:14:44.170 "params": { 00:14:44.170 "timeout_sec": 30 00:14:44.170 } 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "method": "bdev_nvme_set_options", 00:14:44.170 "params": { 00:14:44.170 "action_on_timeout": "none", 00:14:44.170 "timeout_us": 0, 00:14:44.170 "timeout_admin_us": 0, 00:14:44.170 "keep_alive_timeout_ms": 10000, 00:14:44.170 "arbitration_burst": 0, 00:14:44.170 "low_priority_weight": 0, 
00:14:44.170 "medium_priority_weight": 0, 00:14:44.170 "high_priority_weight": 0, 00:14:44.170 "nvme_adminq_poll_period_us": 10000, 00:14:44.170 "nvme_ioq_poll_period_us": 0, 00:14:44.170 "io_queue_requests": 0, 00:14:44.170 "delay_cmd_submit": true, 00:14:44.170 "transport_retry_count": 4, 00:14:44.170 "bdev_retry_count": 3, 00:14:44.170 "transport_ack_timeout": 0, 00:14:44.170 "ctrlr_loss_timeout_sec": 0, 00:14:44.170 "reconnect_delay_sec": 0, 00:14:44.170 "fast_io_fail_timeout_sec": 0, 00:14:44.170 "disable_auto_failback": false, 00:14:44.170 "generate_uuids": false, 00:14:44.170 "transport_tos": 0, 00:14:44.170 "nvme_error_stat": false, 00:14:44.170 "rdma_srq_size": 0, 00:14:44.170 "io_path_stat": false, 00:14:44.170 "allow_accel_sequence": false, 00:14:44.170 "rdma_max_cq_size": 0, 00:14:44.170 "rdma_cm_event_timeout_ms": 0, 00:14:44.170 "dhchap_digests": [ 00:14:44.170 "sha256", 00:14:44.170 "sha384", 00:14:44.170 "sha512" 00:14:44.170 ], 00:14:44.170 "dhchap_dhgroups": [ 00:14:44.170 "null", 00:14:44.170 "ffdhe2048", 00:14:44.170 "ffdhe3072", 00:14:44.170 "ffdhe4096", 00:14:44.170 "ffdhe6144", 00:14:44.170 "ffdhe8192" 00:14:44.170 ] 00:14:44.170 } 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "method": "bdev_nvme_set_hotplug", 00:14:44.170 "params": { 00:14:44.170 "period_us": 100000, 00:14:44.170 "enable": false 00:14:44.170 } 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "method": "bdev_malloc_create", 00:14:44.170 "params": { 00:14:44.170 "name": "malloc0", 00:14:44.170 "num_blocks": 8192, 00:14:44.170 "block_size": 4096, 00:14:44.170 "physical_block_size": 4096, 00:14:44.170 "uuid": "f0880210-93b7-4b46-9972-ada849ea4d7e", 00:14:44.170 "optimal_io_boundary": 0 00:14:44.170 } 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "method": "bdev_wait_for_examine" 00:14:44.170 } 00:14:44.170 ] 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "subsystem": "scsi", 00:14:44.170 "config": null 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "subsystem": "scheduler", 00:14:44.170 "config": [ 00:14:44.170 { 00:14:44.170 "method": "framework_set_scheduler", 00:14:44.170 "params": { 00:14:44.170 "name": "static" 00:14:44.170 } 00:14:44.170 } 00:14:44.170 ] 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "subsystem": "vhost_scsi", 00:14:44.170 "config": [] 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "subsystem": "vhost_blk", 00:14:44.170 "config": [] 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "subsystem": "ublk", 00:14:44.170 "config": [ 00:14:44.170 { 00:14:44.170 "method": "ublk_create_target", 00:14:44.170 "params": { 00:14:44.170 "cpumask": "1" 00:14:44.170 } 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "method": "ublk_start_disk", 00:14:44.170 "params": { 00:14:44.170 "bdev_name": "malloc0", 00:14:44.170 "ublk_id": 0, 00:14:44.170 "num_queues": 1, 00:14:44.170 "queue_depth": 128 00:14:44.170 } 00:14:44.170 } 00:14:44.170 ] 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "subsystem": "nbd", 00:14:44.170 "config": [] 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "subsystem": "nvmf", 00:14:44.170 "config": [ 00:14:44.170 { 00:14:44.170 "method": "nvmf_set_config", 00:14:44.170 "params": { 00:14:44.170 "discovery_filter": "match_any", 00:14:44.170 "admin_cmd_passthru": { 00:14:44.170 "identify_ctrlr": false 00:14:44.170 } 00:14:44.170 } 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "method": "nvmf_set_max_subsystems", 00:14:44.170 "params": { 00:14:44.170 "max_subsystems": 1024 00:14:44.170 } 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "method": "nvmf_set_crdt", 00:14:44.170 "params": { 00:14:44.170 "crdt1": 0, 00:14:44.170 
"crdt2": 0, 00:14:44.170 "crdt3": 0 00:14:44.170 } 00:14:44.170 } 00:14:44.170 ] 00:14:44.170 }, 00:14:44.170 { 00:14:44.170 "subsystem": "iscsi", 00:14:44.170 "config": [ 00:14:44.170 { 00:14:44.170 "method": "iscsi_set_options", 00:14:44.170 "params": { 00:14:44.170 "node_base": "iqn.2016-06.io.spdk", 00:14:44.170 "max_sessions": 128, 00:14:44.170 "max_connections_per_session": 2, 00:14:44.170 "max_queue_depth": 64, 00:14:44.170 "default_time2wait": 2, 00:14:44.170 "default_time2retain": 20, 00:14:44.170 "first_burst_length": 8192, 00:14:44.171 "immediate_data": true, 00:14:44.171 "allow_duplicated_isid": false, 00:14:44.171 "error_recovery_level": 0, 00:14:44.171 "nop_timeout": 60, 00:14:44.171 "nop_in_interval": 30, 00:14:44.171 "disable_chap": false, 00:14:44.171 "require_chap": false, 00:14:44.171 "mutual_chap": false, 00:14:44.171 "chap_group": 0, 00:14:44.171 "max_large_datain_per_connection": 64, 00:14:44.171 "max_r2t_per_connection": 4, 00:14:44.171 "pdu_pool_size": 36864, 00:14:44.171 "immediate_data_pool_size": 16384, 00:14:44.171 "data_out_pool_size": 2048 00:14:44.171 } 00:14:44.171 } 00:14:44.171 ] 00:14:44.171 } 00:14:44.171 ] 00:14:44.171 }' 00:14:44.171 06:01:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:44.171 [2024-07-13 06:01:35.821028] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:44.171 [2024-07-13 06:01:35.821253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87933 ] 00:14:44.440 [2024-07-13 06:01:35.965903] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.440 [2024-07-13 06:01:36.019278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.715 [2024-07-13 06:01:36.305231] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:44.715 [2024-07-13 06:01:36.305532] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:44.715 [2024-07-13 06:01:36.313349] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:44.715 [2024-07-13 06:01:36.313465] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:44.715 [2024-07-13 06:01:36.313483] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:44.715 [2024-07-13 06:01:36.313493] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:44.715 [2024-07-13 06:01:36.322318] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:44.715 [2024-07-13 06:01:36.322341] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:44.715 [2024-07-13 06:01:36.329242] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:44.715 [2024-07-13 06:01:36.329357] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:44.715 [2024-07-13 06:01:36.346212] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # return 0 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 
00:14:45.281 06:01:36 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 87933 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@948 -- # '[' -z 87933 ']' 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # kill -0 87933 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # uname 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87933 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:45.281 killing process with pid 87933 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87933' 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@967 -- # kill 87933 00:14:45.281 06:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # wait 87933 00:14:45.281 [2024-07-13 06:01:36.996416] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:45.538 [2024-07-13 06:01:37.032276] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:45.538 [2024-07-13 06:01:37.032450] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:45.538 [2024-07-13 06:01:37.040282] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:45.538 [2024-07-13 06:01:37.040374] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:45.538 [2024-07-13 06:01:37.040389] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:45.538 [2024-07-13 06:01:37.040422] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:45.538 [2024-07-13 06:01:37.040624] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:45.538 06:01:37 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:45.538 ************************************ 00:14:45.538 END TEST test_save_ublk_config 00:14:45.538 ************************************ 00:14:45.538 00:14:45.538 real 0m3.283s 00:14:45.538 user 0m2.765s 00:14:45.538 sys 0m1.338s 00:14:45.538 06:01:37 ublk.test_save_ublk_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:45.538 06:01:37 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:45.796 06:01:37 ublk -- common/autotest_common.sh@1142 -- # return 0 00:14:45.796 06:01:37 ublk -- ublk/ublk.sh@139 -- # spdk_pid=87983 00:14:45.796 06:01:37 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:45.796 06:01:37 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM 
EXIT 00:14:45.796 06:01:37 ublk -- ublk/ublk.sh@141 -- # waitforlisten 87983 00:14:45.796 06:01:37 ublk -- common/autotest_common.sh@829 -- # '[' -z 87983 ']' 00:14:45.796 06:01:37 ublk -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.796 06:01:37 ublk -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:45.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.796 06:01:37 ublk -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.796 06:01:37 ublk -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:45.796 06:01:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:45.796 [2024-07-13 06:01:37.408823] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:45.796 [2024-07-13 06:01:37.408992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87983 ] 00:14:46.054 [2024-07-13 06:01:37.557465] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:46.054 [2024-07-13 06:01:37.597419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.054 [2024-07-13 06:01:37.597470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.622 06:01:38 ublk -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:46.622 06:01:38 ublk -- common/autotest_common.sh@862 -- # return 0 00:14:46.622 06:01:38 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:46.622 06:01:38 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:46.622 06:01:38 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.622 06:01:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.622 ************************************ 00:14:46.622 START TEST test_create_ublk 00:14:46.622 ************************************ 00:14:46.622 06:01:38 ublk.test_create_ublk -- common/autotest_common.sh@1123 -- # test_create_ublk 00:14:46.622 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:46.622 06:01:38 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.622 06:01:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.622 [2024-07-13 06:01:38.300252] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:46.622 [2024-07-13 06:01:38.301560] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:46.622 06:01:38 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.622 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:46.622 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:46.622 06:01:38 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.622 06:01:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.622 06:01:38 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.880 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:46.880 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:46.880 06:01:38 ublk.test_create_ublk -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.880 06:01:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.880 [2024-07-13 06:01:38.359364] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:46.880 [2024-07-13 06:01:38.359992] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:46.880 [2024-07-13 06:01:38.360037] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:46.880 [2024-07-13 06:01:38.360049] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:46.880 [2024-07-13 06:01:38.370275] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:46.880 [2024-07-13 06:01:38.370332] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:46.880 [2024-07-13 06:01:38.378283] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:46.880 [2024-07-13 06:01:38.384264] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:46.880 [2024-07-13 06:01:38.398247] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:46.880 06:01:38 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.880 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:46.880 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:46.880 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:46.880 06:01:38 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.880 06:01:38 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.880 06:01:38 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.880 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:46.880 { 00:14:46.880 "ublk_device": "/dev/ublkb0", 00:14:46.880 "id": 0, 00:14:46.880 "queue_depth": 512, 00:14:46.880 "num_queues": 4, 00:14:46.880 "bdev_name": "Malloc0" 00:14:46.880 } 00:14:46.880 ]' 00:14:46.880 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:46.880 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:46.880 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:46.880 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:46.880 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:46.880 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:46.880 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:47.138 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:47.138 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:47.138 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:47.138 06:01:38 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:47.138 06:01:38 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:47.138 06:01:38 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:47.138 06:01:38 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:47.138 06:01:38 ublk.test_create_ublk -- 
lvol/common.sh@43 -- # local rw=write 00:14:47.138 06:01:38 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:47.138 06:01:38 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:47.138 06:01:38 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:47.138 06:01:38 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:47.138 06:01:38 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:47.138 06:01:38 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:47.138 06:01:38 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:47.138 fio: verification read phase will never start because write phase uses all of runtime 00:14:47.138 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:47.138 fio-3.35 00:14:47.138 Starting 1 process 00:14:59.342 00:14:59.342 fio_test: (groupid=0, jobs=1): err= 0: pid=88028: Sat Jul 13 06:01:48 2024 00:14:59.342 write: IOPS=8713, BW=34.0MiB/s (35.7MB/s)(340MiB/10001msec); 0 zone resets 00:14:59.342 clat (usec): min=65, max=8666, avg=113.45, stdev=298.68 00:14:59.342 lat (usec): min=66, max=8688, avg=114.14, stdev=298.73 00:14:59.342 clat percentiles (usec): 00:14:59.342 | 1.00th=[ 73], 5.00th=[ 75], 10.00th=[ 76], 20.00th=[ 77], 00:14:59.342 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 82], 60.00th=[ 85], 00:14:59.342 | 70.00th=[ 91], 80.00th=[ 97], 90.00th=[ 110], 95.00th=[ 123], 00:14:59.342 | 99.00th=[ 172], 99.50th=[ 3261], 99.90th=[ 3982], 99.95th=[ 4113], 00:14:59.342 | 99.99th=[ 7898] 00:14:59.342 bw ( KiB/s): min=18168, max=43520, per=98.97%, avg=34495.16, stdev=11121.99, samples=19 00:14:59.342 iops : min= 4542, max=10880, avg=8623.79, stdev=2780.50, samples=19 00:14:59.342 lat (usec) : 100=83.61%, 250=15.43%, 500=0.01%, 750=0.02%, 1000=0.08% 00:14:59.342 lat (msec) : 2=0.17%, 4=0.59%, 10=0.09% 00:14:59.342 cpu : usr=2.31%, sys=6.06%, ctx=87144, majf=0, minf=796 00:14:59.342 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:59.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.342 issued rwts: total=0,87143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.342 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:59.342 00:14:59.342 Run status group 0 (all jobs): 00:14:59.342 WRITE: bw=34.0MiB/s (35.7MB/s), 34.0MiB/s-34.0MiB/s (35.7MB/s-35.7MB/s), io=340MiB (357MB), run=10001-10001msec 00:14:59.342 00:14:59.342 Disk stats (read/write): 00:14:59.342 ublkb0: ios=0/86059, merge=0/0, ticks=0/9110, in_queue=9110, util=99.10% 00:14:59.342 06:01:48 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.342 [2024-07-13 06:01:48.924320] 
ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:59.342 [2024-07-13 06:01:48.962250] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:59.342 [2024-07-13 06:01:48.967208] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:59.342 [2024-07-13 06:01:48.978265] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:59.342 [2024-07-13 06:01:48.978827] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:59.342 [2024-07-13 06:01:48.978968] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.342 06:01:48 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.342 [2024-07-13 06:01:48.992378] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:59.342 request: 00:14:59.342 { 00:14:59.342 "ublk_id": 0, 00:14:59.342 "method": "ublk_stop_disk", 00:14:59.342 "req_id": 1 00:14:59.342 } 00:14:59.342 Got JSON-RPC error response 00:14:59.342 response: 00:14:59.342 { 00:14:59.342 "code": -19, 00:14:59.342 "message": "No such device" 00:14:59.342 } 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:59.342 06:01:48 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.342 06:01:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.342 [2024-07-13 06:01:49.000364] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:59.342 [2024-07-13 06:01:49.001736] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:59.343 [2024-07-13 06:01:49.001796] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:59.343 06:01:49 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.343 06:01:49 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:59.343 06:01:49 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.343 
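[Editor's note] Worth noting in the exchange above: after ublk_stop_disk 0 succeeds once, the test deliberately repeats the call under NOT and asserts the JSON-RPC error -19 ("No such device"), confirming the device really left the target's tailq. Reproduced by hand with the rpc.py path used later in this log, the check amounts to:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_stop_disk 0
    # A second stop must fail with {"code": -19, "message": "No such device"}
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_stop_disk 0 && echo "BUG: stop succeeded twice"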
06:01:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.343 06:01:49 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.343 06:01:49 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:59.343 06:01:49 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:59.343 06:01:49 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.343 06:01:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.343 06:01:49 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.343 06:01:49 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:59.343 06:01:49 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:59.343 06:01:49 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:59.343 06:01:49 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:59.343 06:01:49 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.343 06:01:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.343 06:01:49 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.343 06:01:49 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:59.343 06:01:49 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:59.343 ************************************ 00:14:59.343 END TEST test_create_ublk 00:14:59.343 ************************************ 00:14:59.343 06:01:49 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:59.343 00:14:59.343 real 0m10.874s 00:14:59.343 user 0m0.652s 00:14:59.343 sys 0m0.721s 00:14:59.343 06:01:49 ublk.test_create_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:59.343 06:01:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.343 06:01:49 ublk -- common/autotest_common.sh@1142 -- # return 0 00:14:59.343 06:01:49 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:59.343 06:01:49 ublk -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:59.343 06:01:49 ublk -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.343 06:01:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.343 ************************************ 00:14:59.343 START TEST test_create_multi_ublk 00:14:59.343 ************************************ 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@1123 -- # test_create_multi_ublk 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.343 [2024-07-13 06:01:49.226300] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:59.343 [2024-07-13 06:01:49.227476] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.343 06:01:49 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.343 [2024-07-13 06:01:49.288387] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:59.343 [2024-07-13 06:01:49.288976] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:59.343 [2024-07-13 06:01:49.289000] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:59.343 [2024-07-13 06:01:49.289013] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:59.343 [2024-07-13 06:01:49.295340] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:59.343 [2024-07-13 06:01:49.295375] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:59.343 [2024-07-13 06:01:49.303245] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:59.343 [2024-07-13 06:01:49.304034] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:59.343 [2024-07-13 06:01:49.325160] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.343 [2024-07-13 06:01:49.378397] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:59.343 [2024-07-13 06:01:49.378968] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:59.343 [2024-07-13 06:01:49.378995] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:59.343 [2024-07-13 06:01:49.379006] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:59.343 [2024-07-13 
06:01:49.387491] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:59.343 [2024-07-13 06:01:49.387520] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:59.343 [2024-07-13 06:01:49.394230] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:59.343 [2024-07-13 06:01:49.394994] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:59.343 [2024-07-13 06:01:49.403232] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.343 [2024-07-13 06:01:49.467395] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:59.343 [2024-07-13 06:01:49.467975] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:59.343 [2024-07-13 06:01:49.467999] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:59.343 [2024-07-13 06:01:49.468012] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:59.343 [2024-07-13 06:01:49.478227] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:59.343 [2024-07-13 06:01:49.478262] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:59.343 [2024-07-13 06:01:49.486250] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:59.343 [2024-07-13 06:01:49.487029] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:59.343 [2024-07-13 06:01:49.503246] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
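[Editor's note] The ublk2 iteration above and the ublk3 iteration below come from the same loop: per the xtrace markers (ublk/ublk.sh@64-68), each pass creates a 128 MiB malloc bdev with 4096-byte blocks and exports it as a ublk disk with 4 queues of depth 512. A reconstructed sketch of that loop, with names taken from the trace:

    for i in $(seq 0 $MAX_DEV_ID); do                      # MAX_DEV_ID is 3 in this run
      rpc_cmd bdev_malloc_create -b Malloc$i 128 4096      # 128 MiB bdev, 4096-byte block size
      rpc_cmd ublk_start_disk Malloc$i $i -q 4 -d 512      # exposed as /dev/ublkb$i
    done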
00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.343 [2024-07-13 06:01:49.566406] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:59.343 [2024-07-13 06:01:49.566944] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:59.343 [2024-07-13 06:01:49.566972] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:59.343 [2024-07-13 06:01:49.566983] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:59.343 [2024-07-13 06:01:49.574313] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:59.343 [2024-07-13 06:01:49.574355] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:59.343 [2024-07-13 06:01:49.582273] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:59.343 [2024-07-13 06:01:49.583054] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:59.343 [2024-07-13 06:01:49.586028] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.343 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:59.344 { 00:14:59.344 "ublk_device": "/dev/ublkb0", 00:14:59.344 "id": 0, 00:14:59.344 "queue_depth": 512, 00:14:59.344 "num_queues": 4, 00:14:59.344 "bdev_name": "Malloc0" 00:14:59.344 }, 00:14:59.344 { 00:14:59.344 "ublk_device": "/dev/ublkb1", 00:14:59.344 "id": 1, 00:14:59.344 "queue_depth": 512, 00:14:59.344 "num_queues": 4, 00:14:59.344 "bdev_name": "Malloc1" 00:14:59.344 }, 00:14:59.344 { 00:14:59.344 "ublk_device": "/dev/ublkb2", 00:14:59.344 "id": 2, 00:14:59.344 "queue_depth": 512, 00:14:59.344 "num_queues": 4, 00:14:59.344 "bdev_name": "Malloc2" 00:14:59.344 }, 00:14:59.344 { 00:14:59.344 "ublk_device": "/dev/ublkb3", 00:14:59.344 "id": 3, 00:14:59.344 "queue_depth": 512, 00:14:59.344 "num_queues": 4, 00:14:59.344 "bdev_name": "Malloc3" 00:14:59.344 } 00:14:59.344 ]' 00:14:59.344 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:59.344 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.344 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:59.344 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:59.344 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:59.344 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # 
[[ 0 = \0 ]] 00:14:59.344 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:59.344 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:59.344 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:59.344 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:59.344 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:59.344 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:59.344 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.344 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:59.344 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:59.344 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:59.344 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:59.344 06:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:59.344 06:01:50 
ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.344 [2024-07-13 06:01:50.678585] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:59.344 [2024-07-13 06:01:50.710551] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:59.344 [2024-07-13 06:01:50.712178] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:59.344 [2024-07-13 06:01:50.718265] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:59.344 [2024-07-13 06:01:50.718729] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:59.344 [2024-07-13 06:01:50.718772] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.344 [2024-07-13 06:01:50.730328] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:59.344 [2024-07-13 06:01:50.762237] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:59.344 [2024-07-13 06:01:50.763521] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:59.344 [2024-07-13 06:01:50.770344] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:59.344 [2024-07-13 06:01:50.770731] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:59.344 [2024-07-13 06:01:50.770767] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.344 [2024-07-13 06:01:50.786324] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:59.344 [2024-07-13 06:01:50.814296] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:59.344 [2024-07-13 06:01:50.819421] ublk.c: 
434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:59.344 [2024-07-13 06:01:50.828278] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:59.344 [2024-07-13 06:01:50.828699] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:59.344 [2024-07-13 06:01:50.828741] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.344 [2024-07-13 06:01:50.838375] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:59.344 [2024-07-13 06:01:50.875360] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:59.344 [2024-07-13 06:01:50.879636] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:59.344 [2024-07-13 06:01:50.887334] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:59.344 [2024-07-13 06:01:50.887740] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:59.344 [2024-07-13 06:01:50.887785] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.344 06:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:59.603 [2024-07-13 06:01:51.149313] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:59.603 [2024-07-13 06:01:51.150559] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:59.603 [2024-07-13 06:01:51.150617] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.603 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.861 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.861 06:01:51 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:59.861 06:01:51 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:59.861 06:01:51 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:59.861 06:01:51 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:59.861 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.861 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.861 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.861 06:01:51 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:59.861 06:01:51 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:59.861 06:01:51 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:59.861 00:14:59.862 real 0m2.237s 00:14:59.862 user 0m1.322s 00:14:59.862 sys 0m0.182s 00:14:59.862 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:59.862 06:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.862 ************************************ 00:14:59.862 END TEST test_create_multi_ublk 00:14:59.862 ************************************ 00:14:59.862 06:01:51 ublk -- common/autotest_common.sh@1142 -- # return 0 00:14:59.862 06:01:51 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:59.862 06:01:51 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:59.862 06:01:51 ublk -- ublk/ublk.sh@130 -- # killprocess 87983 00:14:59.862 06:01:51 ublk -- common/autotest_common.sh@948 -- # '[' -z 87983 ']' 00:14:59.862 06:01:51 ublk -- common/autotest_common.sh@952 -- # kill -0 87983 00:14:59.862 06:01:51 ublk -- common/autotest_common.sh@953 -- # uname 00:14:59.862 06:01:51 ublk -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:59.862 06:01:51 ublk -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87983 00:14:59.862 killing process with pid 87983 00:14:59.862 06:01:51 ublk -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:59.862 06:01:51 ublk -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:14:59.862 06:01:51 ublk -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87983' 00:14:59.862 06:01:51 ublk -- common/autotest_common.sh@967 -- # kill 87983 00:14:59.862 06:01:51 ublk -- common/autotest_common.sh@972 -- # wait 87983 00:15:00.120 [2024-07-13 06:01:51.616768] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:00.120 [2024-07-13 06:01:51.616836] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:00.120 00:15:00.120 real 0m17.952s 00:15:00.120 user 0m28.427s 00:15:00.120 sys 0m7.616s 00:15:00.120 06:01:51 ublk -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:00.120 ************************************ 00:15:00.120 END TEST ublk 00:15:00.120 ************************************ 00:15:00.120 06:01:51 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:00.379 06:01:51 -- common/autotest_common.sh@1142 -- # return 0 00:15:00.379 06:01:51 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:00.379 06:01:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:00.379 06:01:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.379 06:01:51 -- common/autotest_common.sh@10 -- # set +x 00:15:00.379 ************************************ 00:15:00.379 START TEST ublk_recovery 00:15:00.379 ************************************ 00:15:00.379 06:01:51 ublk_recovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:00.379 * Looking for test storage... 00:15:00.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:00.379 06:01:51 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:00.379 06:01:51 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:00.379 06:01:51 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:00.379 06:01:51 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:00.379 06:01:51 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:00.379 06:01:51 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:00.379 06:01:51 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:00.379 06:01:51 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:00.380 06:01:51 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:00.380 06:01:51 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:00.380 06:01:51 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=88322 00:15:00.380 06:01:51 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:00.380 06:01:51 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:00.380 06:01:51 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 88322 00:15:00.380 06:01:51 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 88322 ']' 00:15:00.380 06:01:51 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.380 06:01:51 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:00.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.380 06:01:51 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
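Before any ublk RPCs can run, the recovery test launches a fresh SPDK target and blocks until its RPC socket answers. A minimal sketch of that start-and-wait step, using the binary and socket paths shown in this run (the real waitforlisten helper in autotest_common.sh does more bookkeeping, so treat this as an approximation):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
spdk_pid=$!
# Poll the UNIX-domain RPC socket until the target answers a basic RPC.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done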
00:15:00.380 06:01:51 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:00.380 06:01:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:00.380 [2024-07-13 06:01:52.051554] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:15:00.380 [2024-07-13 06:01:52.051761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88322 ] 00:15:00.637 [2024-07-13 06:01:52.198619] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:00.637 [2024-07-13 06:01:52.234234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.638 [2024-07-13 06:01:52.234277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.213 06:01:52 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:01.213 06:01:52 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:15:01.213 06:01:52 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:01.213 06:01:52 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.213 06:01:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:01.213 [2024-07-13 06:01:52.923284] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:01.213 [2024-07-13 06:01:52.924600] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:01.213 06:01:52 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.213 06:01:52 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:01.213 06:01:52 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.213 06:01:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:01.473 malloc0 00:15:01.473 06:01:52 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.473 06:01:52 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:01.473 06:01:52 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.473 06:01:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:01.473 [2024-07-13 06:01:52.962408] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:15:01.473 [2024-07-13 06:01:52.962566] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:01.473 [2024-07-13 06:01:52.962585] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:01.473 [2024-07-13 06:01:52.962604] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:01.473 [2024-07-13 06:01:52.971333] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:01.473 [2024-07-13 06:01:52.971363] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:01.473 [2024-07-13 06:01:52.978278] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:01.473 [2024-07-13 06:01:52.978457] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:01.473 [2024-07-13 06:01:52.995293] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:01.473 1 00:15:01.473 06:01:53 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:15:01.473 06:01:53 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:02.408 06:01:54 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=88355 00:15:02.408 06:01:54 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:02.408 06:01:54 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:02.408 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:02.408 fio-3.35 00:15:02.408 Starting 1 process 00:15:07.678 06:01:59 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 88322 00:15:07.678 06:01:59 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:12.961 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 88322 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:12.961 06:02:04 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=88459 00:15:12.961 06:02:04 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:12.961 06:02:04 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:12.961 06:02:04 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 88459 00:15:12.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.961 06:02:04 ublk_recovery -- common/autotest_common.sh@829 -- # '[' -z 88459 ']' 00:15:12.961 06:02:04 ublk_recovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.961 06:02:04 ublk_recovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.961 06:02:04 ublk_recovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.961 06:02:04 ublk_recovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.961 06:02:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.961 [2024-07-13 06:02:04.127378] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
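With fio still writing to /dev/ublkb1, the test SIGKILLs the target to simulate a crash, then starts a second target so the orphaned ublk device can be reattached instead of recreated. Condensed from the trace above and below (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the sizes and queue parameters are the ones used in this run):

# First target: export the malloc bdev as ublk device 1 and start I/O.
rpc.py ublk_create_target
rpc.py bdev_malloc_create -b malloc0 64 4096
rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
    --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
kill -9 "$spdk_pid"   # crash the target mid-I/O
# Second target: recreate the bdev, then recover rather than restart the disk.
rpc.py ublk_create_target
rpc.py bdev_malloc_create -b malloc0 64 4096
rpc.py ublk_recover_disk malloc0 1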
00:15:12.961 [2024-07-13 06:02:04.127850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88459 ] 00:15:12.961 [2024-07-13 06:02:04.288463] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:12.961 [2024-07-13 06:02:04.337275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.961 [2024-07-13 06:02:04.337332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.533 06:02:05 ublk_recovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:13.533 06:02:05 ublk_recovery -- common/autotest_common.sh@862 -- # return 0 00:15:13.533 06:02:05 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:13.533 06:02:05 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.533 06:02:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:13.533 [2024-07-13 06:02:05.095232] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:13.533 [2024-07-13 06:02:05.096476] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:13.533 06:02:05 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.533 06:02:05 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:13.533 06:02:05 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.533 06:02:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:13.533 malloc0 00:15:13.533 06:02:05 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.533 06:02:05 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:13.533 06:02:05 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.533 06:02:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:13.533 [2024-07-13 06:02:05.128876] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:13.533 [2024-07-13 06:02:05.128948] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:13.533 [2024-07-13 06:02:05.128978] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:13.534 [2024-07-13 06:02:05.135296] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:13.534 [2024-07-13 06:02:05.135334] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:13.534 1 00:15:13.534 [2024-07-13 06:02:05.135418] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:13.534 06:02:05 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.534 06:02:05 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 88355 00:15:13.534 [2024-07-13 06:02:05.142252] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:13.534 [2024-07-13 06:02:05.149896] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:13.534 [2024-07-13 06:02:05.157495] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:13.534 [2024-07-13 06:02:05.157555] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:09.759 00:16:09.759 
fio_test: (groupid=0, jobs=1): err= 0: pid=88358: Sat Jul 13 06:02:54 2024 00:16:09.759 read: IOPS=18.5k, BW=72.2MiB/s (75.7MB/s)(4332MiB/60002msec) 00:16:09.760 slat (usec): min=2, max=644, avg= 6.19, stdev= 2.90 00:16:09.760 clat (usec): min=1104, max=6156.9k, avg=3410.57, stdev=47104.06 00:16:09.760 lat (usec): min=1112, max=6156.9k, avg=3416.76, stdev=47104.05 00:16:09.760 clat percentiles (usec): 00:16:09.760 | 1.00th=[ 2474], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2769], 00:16:09.760 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 3032], 00:16:09.760 | 70.00th=[ 3064], 80.00th=[ 3130], 90.00th=[ 3261], 95.00th=[ 3982], 00:16:09.760 | 99.00th=[ 5604], 99.50th=[ 6194], 99.90th=[ 7898], 99.95th=[ 8717], 00:16:09.760 | 99.99th=[14353] 00:16:09.760 bw ( KiB/s): min= 5504, max=91576, per=100.00%, avg=81491.28, stdev=10078.78, samples=108 00:16:09.760 iops : min= 1376, max=22894, avg=20372.81, stdev=2519.70, samples=108 00:16:09.760 write: IOPS=18.5k, BW=72.2MiB/s (75.7MB/s)(4330MiB/60002msec); 0 zone resets 00:16:09.760 slat (usec): min=2, max=748, avg= 6.27, stdev= 3.05 00:16:09.760 clat (usec): min=1006, max=6157.0k, avg=3502.69, stdev=46386.45 00:16:09.760 lat (usec): min=1047, max=6157.0k, avg=3508.96, stdev=46386.44 00:16:09.760 clat percentiles (usec): 00:16:09.760 | 1.00th=[ 2540], 5.00th=[ 2737], 10.00th=[ 2802], 20.00th=[ 2868], 00:16:09.760 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3130], 00:16:09.760 | 70.00th=[ 3195], 80.00th=[ 3261], 90.00th=[ 3359], 95.00th=[ 3818], 00:16:09.760 | 99.00th=[ 5604], 99.50th=[ 6259], 99.90th=[ 8029], 99.95th=[ 8848], 00:16:09.760 | 99.99th=[14353] 00:16:09.760 bw ( KiB/s): min= 5392, max=90872, per=100.00%, avg=81462.10, stdev=10102.87, samples=108 00:16:09.760 iops : min= 1348, max=22718, avg=20365.52, stdev=2525.72, samples=108 00:16:09.760 lat (msec) : 2=0.08%, 4=95.23%, 10=4.66%, 20=0.02%, >=2000=0.01% 00:16:09.760 cpu : usr=9.68%, sys=21.33%, ctx=67416, majf=0, minf=13 00:16:09.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:09.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:09.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:09.760 issued rwts: total=1108973,1108428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:09.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:09.760 00:16:09.760 Run status group 0 (all jobs): 00:16:09.760 READ: bw=72.2MiB/s (75.7MB/s), 72.2MiB/s-72.2MiB/s (75.7MB/s-75.7MB/s), io=4332MiB (4542MB), run=60002-60002msec 00:16:09.760 WRITE: bw=72.2MiB/s (75.7MB/s), 72.2MiB/s-72.2MiB/s (75.7MB/s-75.7MB/s), io=4330MiB (4540MB), run=60002-60002msec 00:16:09.760 00:16:09.760 Disk stats (read/write): 00:16:09.760 ublkb1: ios=1106665/1106209, merge=0/0, ticks=3679288/3659864, in_queue=7339152, util=99.94% 00:16:09.760 06:02:54 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:09.760 06:02:54 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.760 06:02:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.760 [2024-07-13 06:02:54.267478] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:09.760 [2024-07-13 06:02:54.303220] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:09.760 [2024-07-13 06:02:54.303698] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:09.760 [2024-07-13 06:02:54.311224] ublk.c: 
328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:09.760 [2024-07-13 06:02:54.311371] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:09.760 [2024-07-13 06:02:54.311388] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:09.760 06:02:54 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.760 06:02:54 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:09.760 06:02:54 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.760 06:02:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.760 [2024-07-13 06:02:54.326350] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:09.760 [2024-07-13 06:02:54.327690] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:09.760 [2024-07-13 06:02:54.327753] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:09.760 06:02:54 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.760 06:02:54 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:09.760 06:02:54 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:09.760 06:02:54 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 88459 00:16:09.760 06:02:54 ublk_recovery -- common/autotest_common.sh@948 -- # '[' -z 88459 ']' 00:16:09.760 06:02:54 ublk_recovery -- common/autotest_common.sh@952 -- # kill -0 88459 00:16:09.760 06:02:54 ublk_recovery -- common/autotest_common.sh@953 -- # uname 00:16:09.760 06:02:54 ublk_recovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:09.760 06:02:54 ublk_recovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88459 00:16:09.760 killing process with pid 88459 00:16:09.760 06:02:54 ublk_recovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:09.760 06:02:54 ublk_recovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:09.760 06:02:54 ublk_recovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88459' 00:16:09.760 06:02:54 ublk_recovery -- common/autotest_common.sh@967 -- # kill 88459 00:16:09.760 06:02:54 ublk_recovery -- common/autotest_common.sh@972 -- # wait 88459 00:16:09.760 [2024-07-13 06:02:54.465082] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:16:09.760 [2024-07-13 06:02:54.465242] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:16:09.760 00:16:09.760 real 1m2.829s 00:16:09.760 user 1m43.913s 00:16:09.760 sys 0m29.943s 00:16:09.760 06:02:54 ublk_recovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:09.760 ************************************ 00:16:09.760 06:02:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.760 END TEST ublk_recovery 00:16:09.760 ************************************ 00:16:09.760 06:02:54 -- common/autotest_common.sh@1142 -- # return 0 00:16:09.760 06:02:54 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:16:09.760 06:02:54 -- spdk/autotest.sh@260 -- # timing_exit lib 00:16:09.760 06:02:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:09.760 06:02:54 -- common/autotest_common.sh@10 -- # set +x 00:16:09.760 06:02:54 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:16:09.760 06:02:54 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:16:09.760 06:02:54 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:16:09.760 06:02:54 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:16:09.760 06:02:54 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:16:09.760 06:02:54 -- 
spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:16:09.760 06:02:54 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:16:09.760 06:02:54 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:16:09.760 06:02:54 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:16:09.760 06:02:54 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:16:09.760 06:02:54 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:09.760 06:02:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:09.760 06:02:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.760 06:02:54 -- common/autotest_common.sh@10 -- # set +x 00:16:09.760 ************************************ 00:16:09.760 START TEST ftl 00:16:09.760 ************************************ 00:16:09.760 06:02:54 ftl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:09.760 * Looking for test storage... 00:16:09.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:09.760 06:02:54 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:09.760 06:02:54 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:09.760 06:02:54 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:09.760 06:02:54 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:09.760 06:02:54 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:09.760 06:02:54 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:09.760 06:02:54 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:09.760 06:02:54 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:09.760 06:02:54 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:09.760 06:02:54 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:09.760 06:02:54 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:09.760 06:02:54 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:09.760 06:02:54 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:09.760 06:02:54 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:09.760 06:02:54 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:09.760 06:02:54 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:09.760 06:02:54 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:09.760 06:02:54 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:09.760 06:02:54 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:09.760 06:02:54 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:09.760 06:02:54 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:09.760 06:02:54 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:09.760 06:02:54 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:09.760 06:02:54 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:09.760 06:02:54 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:09.760 06:02:54 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:09.760 06:02:54 ftl -- ftl/common.sh@23 -- # 
spdk_ini_pid= 00:16:09.760 06:02:54 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:09.760 06:02:54 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:09.760 06:02:54 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:09.760 06:02:54 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:09.760 06:02:54 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:09.760 06:02:54 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:09.760 06:02:54 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:09.760 06:02:54 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:09.760 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:09.760 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:09.760 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:09.760 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:09.760 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:09.760 06:02:55 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=89229 00:16:09.760 06:02:55 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:09.760 06:02:55 ftl -- ftl/ftl.sh@38 -- # waitforlisten 89229 00:16:09.760 06:02:55 ftl -- common/autotest_common.sh@829 -- # '[' -z 89229 ']' 00:16:09.760 06:02:55 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.761 06:02:55 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.761 06:02:55 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.761 06:02:55 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.761 06:02:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:09.761 [2024-07-13 06:02:55.539269] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:16:09.761 [2024-07-13 06:02:55.539459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89229 ] 00:16:09.761 [2024-07-13 06:02:55.692006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.761 [2024-07-13 06:02:55.735757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.761 06:02:56 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.761 06:02:56 ftl -- common/autotest_common.sh@862 -- # return 0 00:16:09.761 06:02:56 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:09.761 06:02:56 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:09.761 06:02:57 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:09.761 06:02:57 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:09.761 06:02:57 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:09.761 06:02:57 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:09.761 06:02:57 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:09.761 06:02:57 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:09.761 06:02:57 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:09.761 06:02:57 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:09.761 06:02:57 ftl -- ftl/ftl.sh@50 -- # break 00:16:09.761 06:02:57 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:09.761 06:02:57 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:09.761 06:02:57 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:09.761 06:02:57 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:09.761 06:02:58 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:09.761 06:02:58 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:09.761 06:02:58 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:09.761 06:02:58 ftl -- ftl/ftl.sh@63 -- # break 00:16:09.761 06:02:58 ftl -- ftl/ftl.sh@66 -- # killprocess 89229 00:16:09.761 06:02:58 ftl -- common/autotest_common.sh@948 -- # '[' -z 89229 ']' 00:16:09.761 06:02:58 ftl -- common/autotest_common.sh@952 -- # kill -0 89229 00:16:09.761 06:02:58 ftl -- common/autotest_common.sh@953 -- # uname 00:16:09.761 06:02:58 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:09.761 06:02:58 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89229 00:16:09.761 06:02:58 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:09.761 killing process with pid 89229 00:16:09.761 06:02:58 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:09.761 06:02:58 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89229' 00:16:09.761 06:02:58 ftl -- common/autotest_common.sh@967 -- # kill 89229 00:16:09.761 06:02:58 ftl -- common/autotest_common.sh@972 -- # wait 89229 00:16:09.761 06:02:58 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:09.761 06:02:58 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:09.761 06:02:58 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:09.761 06:02:58 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.761 06:02:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:09.761 ************************************ 00:16:09.761 START TEST ftl_fio_basic 00:16:09.761 ************************************ 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:09.761 * Looking for test storage... 00:16:09.761 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=89342 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 89342 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- common/autotest_common.sh@829 -- # '[' -z 89342 ']' 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.761 06:02:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:09.761 [2024-07-13 06:02:58.642960] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
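The device-selection step in ftl.sh above is easy to miss in the trace: it lists all bdevs and uses jq to pick an NVMe namespace with 64-byte metadata, non-zoned, and at least 1310720 blocks as the write-buffer cache (here 0000:00:10.0), then picks any other sufficiently large non-zoned namespace as the base device (here 0000:00:11.0). The two filters, piped together for readability (the log runs the rpc and jq halves as separate steps, and the cache address appears in the second filter only after the first selects it):

rpc.py bdev_get_bdevs \
  | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
rpc.py bdev_get_bdevs \
  | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'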
00:16:09.761 [2024-07-13 06:02:58.643175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89342 ] 00:16:09.761 [2024-07-13 06:02:58.791841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:09.761 [2024-07-13 06:02:58.829962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.761 [2024-07-13 06:02:58.830031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.761 [2024-07-13 06:02:58.830126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.761 06:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.761 06:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # return 0 00:16:09.761 06:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:09.761 06:02:59 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:09.761 06:02:59 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:09.761 06:02:59 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:09.761 06:02:59 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:09.761 06:02:59 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:09.761 06:02:59 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:09.761 06:02:59 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:09.761 06:02:59 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:09.761 06:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:09.761 06:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:09.761 06:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:09.761 06:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:09.762 06:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:09.762 { 00:16:09.762 "name": "nvme0n1", 00:16:09.762 "aliases": [ 00:16:09.762 "6574a23e-1c56-45cf-8d92-20afbffdd28f" 00:16:09.762 ], 00:16:09.762 "product_name": "NVMe disk", 00:16:09.762 "block_size": 4096, 00:16:09.762 "num_blocks": 1310720, 00:16:09.762 "uuid": "6574a23e-1c56-45cf-8d92-20afbffdd28f", 00:16:09.762 "assigned_rate_limits": { 00:16:09.762 "rw_ios_per_sec": 0, 00:16:09.762 "rw_mbytes_per_sec": 0, 00:16:09.762 "r_mbytes_per_sec": 0, 00:16:09.762 "w_mbytes_per_sec": 0 00:16:09.762 }, 00:16:09.762 "claimed": false, 00:16:09.762 "zoned": false, 00:16:09.762 "supported_io_types": { 00:16:09.762 "read": true, 00:16:09.762 "write": true, 00:16:09.762 "unmap": true, 00:16:09.762 "flush": true, 00:16:09.762 "reset": true, 00:16:09.762 "nvme_admin": true, 00:16:09.762 "nvme_io": true, 00:16:09.762 "nvme_io_md": false, 00:16:09.762 "write_zeroes": true, 00:16:09.762 "zcopy": false, 00:16:09.762 "get_zone_info": false, 00:16:09.762 "zone_management": false, 00:16:09.762 "zone_append": false, 00:16:09.762 "compare": true, 00:16:09.762 "compare_and_write": false, 00:16:09.762 "abort": true, 00:16:09.762 "seek_hole": false, 00:16:09.762 
"seek_data": false, 00:16:09.762 "copy": true, 00:16:09.762 "nvme_iov_md": false 00:16:09.762 }, 00:16:09.762 "driver_specific": { 00:16:09.762 "nvme": [ 00:16:09.762 { 00:16:09.762 "pci_address": "0000:00:11.0", 00:16:09.762 "trid": { 00:16:09.762 "trtype": "PCIe", 00:16:09.762 "traddr": "0000:00:11.0" 00:16:09.762 }, 00:16:09.762 "ctrlr_data": { 00:16:09.762 "cntlid": 0, 00:16:09.762 "vendor_id": "0x1b36", 00:16:09.762 "model_number": "QEMU NVMe Ctrl", 00:16:09.762 "serial_number": "12341", 00:16:09.762 "firmware_revision": "8.0.0", 00:16:09.762 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:09.762 "oacs": { 00:16:09.762 "security": 0, 00:16:09.762 "format": 1, 00:16:09.762 "firmware": 0, 00:16:09.762 "ns_manage": 1 00:16:09.762 }, 00:16:09.762 "multi_ctrlr": false, 00:16:09.762 "ana_reporting": false 00:16:09.762 }, 00:16:09.762 "vs": { 00:16:09.762 "nvme_version": "1.4" 00:16:09.762 }, 00:16:09.762 "ns_data": { 00:16:09.762 "id": 1, 00:16:09.762 "can_share": false 00:16:09.762 } 00:16:09.762 } 00:16:09.762 ], 00:16:09.762 "mp_policy": "active_passive" 00:16:09.762 } 00:16:09.762 } 00:16:09.762 ]' 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=1977d6ee-f77f-4b2c-93df-1de307ca552c 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1977d6ee-f77f-4b2c-93df-1de307ca552c 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=59fae9a1-282b-4ab5-8fe9-ec7b1449efa1 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 59fae9a1-282b-4ab5-8fe9-ec7b1449efa1 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=59fae9a1-282b-4ab5-8fe9-ec7b1449efa1 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 59fae9a1-282b-4ab5-8fe9-ec7b1449efa1 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=59fae9a1-282b-4ab5-8fe9-ec7b1449efa1 00:16:09.762 06:03:00 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:09.762 06:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 59fae9a1-282b-4ab5-8fe9-ec7b1449efa1 00:16:09.762 06:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:09.762 { 00:16:09.762 "name": "59fae9a1-282b-4ab5-8fe9-ec7b1449efa1", 00:16:09.762 "aliases": [ 00:16:09.762 "lvs/nvme0n1p0" 00:16:09.762 ], 00:16:09.762 "product_name": "Logical Volume", 00:16:09.762 "block_size": 4096, 00:16:09.762 "num_blocks": 26476544, 00:16:09.762 "uuid": "59fae9a1-282b-4ab5-8fe9-ec7b1449efa1", 00:16:09.762 "assigned_rate_limits": { 00:16:09.762 "rw_ios_per_sec": 0, 00:16:09.762 "rw_mbytes_per_sec": 0, 00:16:09.762 "r_mbytes_per_sec": 0, 00:16:09.762 "w_mbytes_per_sec": 0 00:16:09.762 }, 00:16:09.762 "claimed": false, 00:16:09.762 "zoned": false, 00:16:09.762 "supported_io_types": { 00:16:09.762 "read": true, 00:16:09.762 "write": true, 00:16:09.762 "unmap": true, 00:16:09.762 "flush": false, 00:16:09.762 "reset": true, 00:16:09.762 "nvme_admin": false, 00:16:09.762 "nvme_io": false, 00:16:09.762 "nvme_io_md": false, 00:16:09.762 "write_zeroes": true, 00:16:09.762 "zcopy": false, 00:16:09.762 "get_zone_info": false, 00:16:09.762 "zone_management": false, 00:16:09.762 "zone_append": false, 00:16:09.762 "compare": false, 00:16:09.762 "compare_and_write": false, 00:16:09.762 "abort": false, 00:16:09.762 "seek_hole": true, 00:16:09.762 "seek_data": true, 00:16:09.762 "copy": false, 00:16:09.762 "nvme_iov_md": false 00:16:09.762 }, 00:16:09.762 "driver_specific": { 00:16:09.762 "lvol": { 00:16:09.762 "lvol_store_uuid": "1977d6ee-f77f-4b2c-93df-1de307ca552c", 00:16:09.762 "base_bdev": "nvme0n1", 00:16:09.762 "thin_provision": true, 00:16:09.762 "num_allocated_clusters": 0, 00:16:09.762 "snapshot": false, 00:16:09.762 "clone": false, 00:16:09.762 "esnap_clone": false 00:16:09.762 } 00:16:09.762 } 00:16:09.762 } 00:16:09.762 ]' 00:16:09.762 06:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:09.762 06:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:09.762 06:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:09.762 06:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:09.762 06:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:09.762 06:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:09.762 06:03:01 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:09.762 06:03:01 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:09.762 06:03:01 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:10.024 06:03:01 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:10.024 06:03:01 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:10.024 06:03:01 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 59fae9a1-282b-4ab5-8fe9-ec7b1449efa1 00:16:10.024 06:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=59fae9a1-282b-4ab5-8fe9-ec7b1449efa1 00:16:10.024 06:03:01 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:16:10.024 06:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:10.024 06:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:10.024 06:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 59fae9a1-282b-4ab5-8fe9-ec7b1449efa1 00:16:10.300 06:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:10.300 { 00:16:10.300 "name": "59fae9a1-282b-4ab5-8fe9-ec7b1449efa1", 00:16:10.300 "aliases": [ 00:16:10.300 "lvs/nvme0n1p0" 00:16:10.300 ], 00:16:10.300 "product_name": "Logical Volume", 00:16:10.300 "block_size": 4096, 00:16:10.300 "num_blocks": 26476544, 00:16:10.300 "uuid": "59fae9a1-282b-4ab5-8fe9-ec7b1449efa1", 00:16:10.300 "assigned_rate_limits": { 00:16:10.300 "rw_ios_per_sec": 0, 00:16:10.300 "rw_mbytes_per_sec": 0, 00:16:10.300 "r_mbytes_per_sec": 0, 00:16:10.300 "w_mbytes_per_sec": 0 00:16:10.300 }, 00:16:10.300 "claimed": false, 00:16:10.300 "zoned": false, 00:16:10.300 "supported_io_types": { 00:16:10.300 "read": true, 00:16:10.300 "write": true, 00:16:10.300 "unmap": true, 00:16:10.300 "flush": false, 00:16:10.300 "reset": true, 00:16:10.300 "nvme_admin": false, 00:16:10.300 "nvme_io": false, 00:16:10.300 "nvme_io_md": false, 00:16:10.300 "write_zeroes": true, 00:16:10.300 "zcopy": false, 00:16:10.300 "get_zone_info": false, 00:16:10.300 "zone_management": false, 00:16:10.300 "zone_append": false, 00:16:10.300 "compare": false, 00:16:10.300 "compare_and_write": false, 00:16:10.300 "abort": false, 00:16:10.300 "seek_hole": true, 00:16:10.300 "seek_data": true, 00:16:10.300 "copy": false, 00:16:10.300 "nvme_iov_md": false 00:16:10.300 }, 00:16:10.300 "driver_specific": { 00:16:10.300 "lvol": { 00:16:10.300 "lvol_store_uuid": "1977d6ee-f77f-4b2c-93df-1de307ca552c", 00:16:10.300 "base_bdev": "nvme0n1", 00:16:10.300 "thin_provision": true, 00:16:10.300 "num_allocated_clusters": 0, 00:16:10.300 "snapshot": false, 00:16:10.300 "clone": false, 00:16:10.300 "esnap_clone": false 00:16:10.300 } 00:16:10.300 } 00:16:10.300 } 00:16:10.300 ]' 00:16:10.300 06:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:10.300 06:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:10.300 06:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:10.300 06:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:10.300 06:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:10.300 06:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:10.300 06:03:01 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:10.300 06:03:01 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:10.595 06:03:02 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:10.595 06:03:02 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:10.595 06:03:02 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:10.595 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:10.595 06:03:02 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 59fae9a1-282b-4ab5-8fe9-ec7b1449efa1 00:16:10.595 06:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=59fae9a1-282b-4ab5-8fe9-ec7b1449efa1 
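The "unary operator expected" message a few lines above is a genuine (if harmless in this run) shell bug: at fio.sh line 52 a numeric test is evaluated while its variable is empty, so '[' sees only '-eq 1'. A hypothetical reproduction with the usual fixes (the actual variable name at line 52 is not visible in this trace):

flag=                     # empty, as in this run
[ $flag -eq 1 ]           # expands to: [ -eq 1 ]  -> "[: -eq: unary operator expected"
[ "${flag:-0}" -eq 1 ]    # quote with a default: safely evaluates to false
[[ $flag -eq 1 ]]         # bash [[ ]] also tolerates the empty expansion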
00:16:10.595 06:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:10.595 06:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:10.595 06:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:10.595 06:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 59fae9a1-282b-4ab5-8fe9-ec7b1449efa1 00:16:10.858 06:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:10.858 { 00:16:10.858 "name": "59fae9a1-282b-4ab5-8fe9-ec7b1449efa1", 00:16:10.858 "aliases": [ 00:16:10.858 "lvs/nvme0n1p0" 00:16:10.858 ], 00:16:10.858 "product_name": "Logical Volume", 00:16:10.858 "block_size": 4096, 00:16:10.858 "num_blocks": 26476544, 00:16:10.858 "uuid": "59fae9a1-282b-4ab5-8fe9-ec7b1449efa1", 00:16:10.858 "assigned_rate_limits": { 00:16:10.858 "rw_ios_per_sec": 0, 00:16:10.858 "rw_mbytes_per_sec": 0, 00:16:10.858 "r_mbytes_per_sec": 0, 00:16:10.858 "w_mbytes_per_sec": 0 00:16:10.858 }, 00:16:10.858 "claimed": false, 00:16:10.858 "zoned": false, 00:16:10.858 "supported_io_types": { 00:16:10.858 "read": true, 00:16:10.858 "write": true, 00:16:10.858 "unmap": true, 00:16:10.858 "flush": false, 00:16:10.858 "reset": true, 00:16:10.858 "nvme_admin": false, 00:16:10.858 "nvme_io": false, 00:16:10.858 "nvme_io_md": false, 00:16:10.858 "write_zeroes": true, 00:16:10.858 "zcopy": false, 00:16:10.858 "get_zone_info": false, 00:16:10.858 "zone_management": false, 00:16:10.858 "zone_append": false, 00:16:10.858 "compare": false, 00:16:10.858 "compare_and_write": false, 00:16:10.858 "abort": false, 00:16:10.858 "seek_hole": true, 00:16:10.858 "seek_data": true, 00:16:10.858 "copy": false, 00:16:10.858 "nvme_iov_md": false 00:16:10.858 }, 00:16:10.858 "driver_specific": { 00:16:10.858 "lvol": { 00:16:10.858 "lvol_store_uuid": "1977d6ee-f77f-4b2c-93df-1de307ca552c", 00:16:10.858 "base_bdev": "nvme0n1", 00:16:10.858 "thin_provision": true, 00:16:10.858 "num_allocated_clusters": 0, 00:16:10.858 "snapshot": false, 00:16:10.858 "clone": false, 00:16:10.858 "esnap_clone": false 00:16:10.858 } 00:16:10.858 } 00:16:10.858 } 00:16:10.858 ]' 00:16:10.858 06:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:10.858 06:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:10.858 06:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:10.858 06:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:10.858 06:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:10.858 06:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:10.858 06:03:02 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:10.858 06:03:02 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:10.858 06:03:02 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 59fae9a1-282b-4ab5-8fe9-ec7b1449efa1 -c nvc0n1p0 --l2p_dram_limit 60 00:16:11.117 [2024-07-13 06:03:02.769512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.117 [2024-07-13 06:03:02.769568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:11.117 [2024-07-13 06:03:02.769612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:11.117 [2024-07-13 06:03:02.769626] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.117 [2024-07-13 06:03:02.769734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.117 [2024-07-13 06:03:02.769768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:11.117 [2024-07-13 06:03:02.769787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:11.117 [2024-07-13 06:03:02.769801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.117 [2024-07-13 06:03:02.769862] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:11.117 [2024-07-13 06:03:02.770207] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:11.117 [2024-07-13 06:03:02.770237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.117 [2024-07-13 06:03:02.770251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:11.117 [2024-07-13 06:03:02.770267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:16:11.117 [2024-07-13 06:03:02.770280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.117 [2024-07-13 06:03:02.770485] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1d1887a0-2646-4779-9036-7e2a3e28e62a 00:16:11.117 [2024-07-13 06:03:02.771546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.117 [2024-07-13 06:03:02.771586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:11.117 [2024-07-13 06:03:02.771603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:11.117 [2024-07-13 06:03:02.771619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.117 [2024-07-13 06:03:02.776289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.117 [2024-07-13 06:03:02.776359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:11.117 [2024-07-13 06:03:02.776378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.592 ms 00:16:11.117 [2024-07-13 06:03:02.776397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.117 [2024-07-13 06:03:02.776541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.117 [2024-07-13 06:03:02.776568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:11.117 [2024-07-13 06:03:02.776583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:16:11.117 [2024-07-13 06:03:02.776601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.117 [2024-07-13 06:03:02.776699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.117 [2024-07-13 06:03:02.776721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:11.117 [2024-07-13 06:03:02.776735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:16:11.117 [2024-07-13 06:03:02.776750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.117 [2024-07-13 06:03:02.776793] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:11.117 [2024-07-13 06:03:02.778433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.117 [2024-07-13 06:03:02.778473] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:11.117 [2024-07-13 06:03:02.778493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.644 ms 00:16:11.117 [2024-07-13 06:03:02.778509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.117 [2024-07-13 06:03:02.778571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.117 [2024-07-13 06:03:02.778605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:11.117 [2024-07-13 06:03:02.778623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:11.118 [2024-07-13 06:03:02.778636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.118 [2024-07-13 06:03:02.778679] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:11.118 [2024-07-13 06:03:02.778848] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:11.118 [2024-07-13 06:03:02.778892] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:11.118 [2024-07-13 06:03:02.778913] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:11.118 [2024-07-13 06:03:02.778933] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:11.118 [2024-07-13 06:03:02.778952] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:11.118 [2024-07-13 06:03:02.778968] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:11.118 [2024-07-13 06:03:02.778980] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:11.118 [2024-07-13 06:03:02.778994] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:11.118 [2024-07-13 06:03:02.779006] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:11.118 [2024-07-13 06:03:02.779022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.118 [2024-07-13 06:03:02.779035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:11.118 [2024-07-13 06:03:02.779068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:16:11.118 [2024-07-13 06:03:02.779081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.118 [2024-07-13 06:03:02.779226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.118 [2024-07-13 06:03:02.779245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:11.118 [2024-07-13 06:03:02.779286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:11.118 [2024-07-13 06:03:02.779299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.118 [2024-07-13 06:03:02.779442] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:11.118 [2024-07-13 06:03:02.779460] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:11.118 [2024-07-13 06:03:02.779477] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:11.118 [2024-07-13 06:03:02.779490] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.118 [2024-07-13 06:03:02.779505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:11.118 [2024-07-13 
06:03:02.779517] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:11.118 [2024-07-13 06:03:02.779531] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:11.118 [2024-07-13 06:03:02.779543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:11.118 [2024-07-13 06:03:02.779557] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:11.118 [2024-07-13 06:03:02.779569] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:11.118 [2024-07-13 06:03:02.779583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:11.118 [2024-07-13 06:03:02.779595] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:11.118 [2024-07-13 06:03:02.779609] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:11.118 [2024-07-13 06:03:02.779621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:11.118 [2024-07-13 06:03:02.779637] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:11.118 [2024-07-13 06:03:02.779649] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.118 [2024-07-13 06:03:02.779663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:11.118 [2024-07-13 06:03:02.779675] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:11.118 [2024-07-13 06:03:02.779689] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.118 [2024-07-13 06:03:02.779701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:11.118 [2024-07-13 06:03:02.779719] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:11.118 [2024-07-13 06:03:02.779732] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:11.118 [2024-07-13 06:03:02.779746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:11.118 [2024-07-13 06:03:02.779758] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:11.118 [2024-07-13 06:03:02.779774] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:11.118 [2024-07-13 06:03:02.779786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:11.118 [2024-07-13 06:03:02.779800] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:11.118 [2024-07-13 06:03:02.779812] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:11.118 [2024-07-13 06:03:02.779826] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:11.118 [2024-07-13 06:03:02.779838] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:11.118 [2024-07-13 06:03:02.779853] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:11.118 [2024-07-13 06:03:02.779865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:11.118 [2024-07-13 06:03:02.779879] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:11.118 [2024-07-13 06:03:02.779891] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:11.118 [2024-07-13 06:03:02.779904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:11.118 [2024-07-13 06:03:02.779917] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:11.118 [2024-07-13 06:03:02.779931] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:16:11.118 [2024-07-13 06:03:02.779942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:11.118 [2024-07-13 06:03:02.779956] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:11.118 [2024-07-13 06:03:02.779968] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.118 [2024-07-13 06:03:02.779981] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:11.118 [2024-07-13 06:03:02.779994] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:11.118 [2024-07-13 06:03:02.780007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.118 [2024-07-13 06:03:02.780019] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:11.118 [2024-07-13 06:03:02.780033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:11.118 [2024-07-13 06:03:02.780045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:11.118 [2024-07-13 06:03:02.780065] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:11.118 [2024-07-13 06:03:02.780078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:11.118 [2024-07-13 06:03:02.780094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:11.118 [2024-07-13 06:03:02.780106] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:11.118 [2024-07-13 06:03:02.780120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:11.118 [2024-07-13 06:03:02.780147] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:11.118 [2024-07-13 06:03:02.780168] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:11.118 [2024-07-13 06:03:02.780186] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:11.118 [2024-07-13 06:03:02.780204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:11.118 [2024-07-13 06:03:02.780235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:11.118 [2024-07-13 06:03:02.780251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:11.118 [2024-07-13 06:03:02.780265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:11.118 [2024-07-13 06:03:02.780279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:11.118 [2024-07-13 06:03:02.780292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:11.118 [2024-07-13 06:03:02.780306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:11.118 [2024-07-13 06:03:02.780319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:11.118 [2024-07-13 06:03:02.780335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:11.118 [2024-07-13 
06:03:02.780347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:11.118 [2024-07-13 06:03:02.780362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:11.118 [2024-07-13 06:03:02.780375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:11.118 [2024-07-13 06:03:02.780389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:11.118 [2024-07-13 06:03:02.780401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:11.118 [2024-07-13 06:03:02.780416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:11.118 [2024-07-13 06:03:02.780428] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:11.118 [2024-07-13 06:03:02.780444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:11.118 [2024-07-13 06:03:02.780457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:11.118 [2024-07-13 06:03:02.780473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:11.118 [2024-07-13 06:03:02.780486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:11.118 [2024-07-13 06:03:02.780500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:11.118 [2024-07-13 06:03:02.780514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:11.118 [2024-07-13 06:03:02.780529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:11.118 [2024-07-13 06:03:02.780558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:16:11.118 [2024-07-13 06:03:02.780591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:11.118 [2024-07-13 06:03:02.780691] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
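The startup trace above, from superblock creation through the NV-cache scrub notice, is the FTL management state machine executing the bdev_ftl_create call issued earlier in this run. A hedged reconstruction of that RPC sequence, using only commands and values visible in this log:

    # carve a 5171 MiB write-buffer partition out of the cache device
    rpc.py bdev_split_create nvc0n1 -s 5171 1
    # create the FTL bdev: base = the thin-provisioned lvol, cache = the split;
    # --l2p_dram_limit caps the L2P table at 60 MiB of DRAM, and -t 240 widens
    # the RPC timeout because first-time startup scrubs the whole NV cache
    rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d 59fae9a1-282b-4ab5-8fe9-ec7b1449efa1 \
        -c nvc0n1p0 --l2p_dram_limit 60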
00:16:11.118 [2024-07-13 06:03:02.780717] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:13.645 [2024-07-13 06:03:04.990173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.645 [2024-07-13 06:03:04.990264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:13.645 [2024-07-13 06:03:04.990298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2209.496 ms 00:16:13.645 [2024-07-13 06:03:04.990315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.645 [2024-07-13 06:03:04.997999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.645 [2024-07-13 06:03:04.998074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:13.645 [2024-07-13 06:03:04.998115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.584 ms 00:16:13.645 [2024-07-13 06:03:04.998131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.645 [2024-07-13 06:03:04.998359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.645 [2024-07-13 06:03:04.998388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:13.645 [2024-07-13 06:03:04.998404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:16:13.645 [2024-07-13 06:03:04.998419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.645 [2024-07-13 06:03:05.014638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.646 [2024-07-13 06:03:05.014725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:13.646 [2024-07-13 06:03:05.014766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.166 ms 00:16:13.646 [2024-07-13 06:03:05.014782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.646 [2024-07-13 06:03:05.014835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.646 [2024-07-13 06:03:05.014857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:13.646 [2024-07-13 06:03:05.014871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:13.646 [2024-07-13 06:03:05.014885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.646 [2024-07-13 06:03:05.015332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.646 [2024-07-13 06:03:05.015383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:13.646 [2024-07-13 06:03:05.015400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:16:13.646 [2024-07-13 06:03:05.015420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.646 [2024-07-13 06:03:05.015593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.646 [2024-07-13 06:03:05.015628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:13.646 [2024-07-13 06:03:05.015644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:16:13.646 [2024-07-13 06:03:05.015659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.646 [2024-07-13 06:03:05.022393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.646 [2024-07-13 06:03:05.022461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:13.646 [2024-07-13 
06:03:05.022514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.697 ms 00:16:13.646 [2024-07-13 06:03:05.022540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.646 [2024-07-13 06:03:05.032962] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:13.646 [2024-07-13 06:03:05.047182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.646 [2024-07-13 06:03:05.047261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:13.646 [2024-07-13 06:03:05.047303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.499 ms 00:16:13.646 [2024-07-13 06:03:05.047317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.646 [2024-07-13 06:03:05.083650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.646 [2024-07-13 06:03:05.083739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:13.646 [2024-07-13 06:03:05.083782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.260 ms 00:16:13.646 [2024-07-13 06:03:05.083796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.646 [2024-07-13 06:03:05.084055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.646 [2024-07-13 06:03:05.084086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:13.646 [2024-07-13 06:03:05.084106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:16:13.646 [2024-07-13 06:03:05.084119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.646 [2024-07-13 06:03:05.087869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.646 [2024-07-13 06:03:05.087932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:13.646 [2024-07-13 06:03:05.087954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.672 ms 00:16:13.646 [2024-07-13 06:03:05.087968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.646 [2024-07-13 06:03:05.091315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.646 [2024-07-13 06:03:05.091356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:13.646 [2024-07-13 06:03:05.091378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.286 ms 00:16:13.646 [2024-07-13 06:03:05.091391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.646 [2024-07-13 06:03:05.091762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.646 [2024-07-13 06:03:05.091791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:13.646 [2024-07-13 06:03:05.091810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:16:13.646 [2024-07-13 06:03:05.091823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.646 [2024-07-13 06:03:05.121035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.646 [2024-07-13 06:03:05.121096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:13.646 [2024-07-13 06:03:05.121122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.163 ms 00:16:13.646 [2024-07-13 06:03:05.121152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.646 [2024-07-13 06:03:05.125546] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.646 [2024-07-13 06:03:05.125599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:13.646 [2024-07-13 06:03:05.125637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.315 ms 00:16:13.646 [2024-07-13 06:03:05.125654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.646 [2024-07-13 06:03:05.129391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.646 [2024-07-13 06:03:05.129432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:13.646 [2024-07-13 06:03:05.129453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.679 ms 00:16:13.646 [2024-07-13 06:03:05.129466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.646 [2024-07-13 06:03:05.133596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.646 [2024-07-13 06:03:05.133667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:13.646 [2024-07-13 06:03:05.133689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.070 ms 00:16:13.646 [2024-07-13 06:03:05.133703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.646 [2024-07-13 06:03:05.133769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.646 [2024-07-13 06:03:05.133788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:13.646 [2024-07-13 06:03:05.133805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:13.646 [2024-07-13 06:03:05.133818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.646 [2024-07-13 06:03:05.133926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.646 [2024-07-13 06:03:05.133947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:13.646 [2024-07-13 06:03:05.133974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:16:13.646 [2024-07-13 06:03:05.133988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.646 [2024-07-13 06:03:05.135236] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2365.175 ms, result 0 00:16:13.646 { 00:16:13.646 "name": "ftl0", 00:16:13.646 "uuid": "1d1887a0-2646-4779-9036-7e2a3e28e62a" 00:16:13.646 } 00:16:13.646 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:13.646 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:16:13.646 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:13.646 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local i 00:16:13.646 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:13.646 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:13.646 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:13.904 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:14.162 [ 00:16:14.162 { 00:16:14.162 "name": "ftl0", 00:16:14.162 "aliases": [ 00:16:14.162 "1d1887a0-2646-4779-9036-7e2a3e28e62a" 00:16:14.162 ], 00:16:14.162 "product_name": "FTL disk", 00:16:14.162 
"block_size": 4096, 00:16:14.162 "num_blocks": 20971520, 00:16:14.162 "uuid": "1d1887a0-2646-4779-9036-7e2a3e28e62a", 00:16:14.162 "assigned_rate_limits": { 00:16:14.162 "rw_ios_per_sec": 0, 00:16:14.162 "rw_mbytes_per_sec": 0, 00:16:14.162 "r_mbytes_per_sec": 0, 00:16:14.162 "w_mbytes_per_sec": 0 00:16:14.162 }, 00:16:14.162 "claimed": false, 00:16:14.162 "zoned": false, 00:16:14.162 "supported_io_types": { 00:16:14.162 "read": true, 00:16:14.162 "write": true, 00:16:14.162 "unmap": true, 00:16:14.162 "flush": true, 00:16:14.162 "reset": false, 00:16:14.162 "nvme_admin": false, 00:16:14.162 "nvme_io": false, 00:16:14.162 "nvme_io_md": false, 00:16:14.162 "write_zeroes": true, 00:16:14.162 "zcopy": false, 00:16:14.162 "get_zone_info": false, 00:16:14.162 "zone_management": false, 00:16:14.162 "zone_append": false, 00:16:14.162 "compare": false, 00:16:14.162 "compare_and_write": false, 00:16:14.162 "abort": false, 00:16:14.162 "seek_hole": false, 00:16:14.162 "seek_data": false, 00:16:14.162 "copy": false, 00:16:14.162 "nvme_iov_md": false 00:16:14.162 }, 00:16:14.162 "driver_specific": { 00:16:14.162 "ftl": { 00:16:14.162 "base_bdev": "59fae9a1-282b-4ab5-8fe9-ec7b1449efa1", 00:16:14.162 "cache": "nvc0n1p0" 00:16:14.162 } 00:16:14.162 } 00:16:14.162 } 00:16:14.162 ] 00:16:14.162 06:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # return 0 00:16:14.162 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:14.162 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:14.420 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:16:14.420 06:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:14.681 [2024-07-13 06:03:06.155153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.681 [2024-07-13 06:03:06.155245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:14.681 [2024-07-13 06:03:06.155267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:14.681 [2024-07-13 06:03:06.155283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.681 [2024-07-13 06:03:06.155329] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:14.681 [2024-07-13 06:03:06.155774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.681 [2024-07-13 06:03:06.155796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:14.681 [2024-07-13 06:03:06.155818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:16:14.681 [2024-07-13 06:03:06.155831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.681 [2024-07-13 06:03:06.156301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.681 [2024-07-13 06:03:06.156330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:14.681 [2024-07-13 06:03:06.156350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:16:14.681 [2024-07-13 06:03:06.156362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.681 [2024-07-13 06:03:06.159719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.681 [2024-07-13 06:03:06.159764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:14.681 [2024-07-13 
06:03:06.159817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.323 ms 00:16:14.681 [2024-07-13 06:03:06.159830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.681 [2024-07-13 06:03:06.166368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.681 [2024-07-13 06:03:06.166428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:14.681 [2024-07-13 06:03:06.166468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.498 ms 00:16:14.681 [2024-07-13 06:03:06.166482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.681 [2024-07-13 06:03:06.168111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.681 [2024-07-13 06:03:06.168176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:14.681 [2024-07-13 06:03:06.168215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.501 ms 00:16:14.681 [2024-07-13 06:03:06.168229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.681 [2024-07-13 06:03:06.172220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.681 [2024-07-13 06:03:06.172276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:14.681 [2024-07-13 06:03:06.172317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.921 ms 00:16:14.681 [2024-07-13 06:03:06.172331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.681 [2024-07-13 06:03:06.172508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.681 [2024-07-13 06:03:06.172529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:14.681 [2024-07-13 06:03:06.172546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:16:14.681 [2024-07-13 06:03:06.172559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.681 [2024-07-13 06:03:06.174174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.681 [2024-07-13 06:03:06.174226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:14.681 [2024-07-13 06:03:06.174246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.549 ms 00:16:14.681 [2024-07-13 06:03:06.174259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.681 [2024-07-13 06:03:06.175771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.681 [2024-07-13 06:03:06.175812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:14.681 [2024-07-13 06:03:06.175833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.458 ms 00:16:14.681 [2024-07-13 06:03:06.175847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.681 [2024-07-13 06:03:06.176946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.681 [2024-07-13 06:03:06.177001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:14.681 [2024-07-13 06:03:06.177037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.046 ms 00:16:14.681 [2024-07-13 06:03:06.177049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.681 [2024-07-13 06:03:06.178238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.681 [2024-07-13 06:03:06.178277] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:14.681 [2024-07-13 06:03:06.178296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.073 ms 00:16:14.681 [2024-07-13 06:03:06.178309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.681 [2024-07-13 06:03:06.178366] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:14.681 [2024-07-13 06:03:06.178389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 
06:03:06.178709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:14.681 [2024-07-13 06:03:06.178830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.178843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.178857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.178870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.178885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.178898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.178912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.178925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.178942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.178954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.178969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.178982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:16:14.682 [2024-07-13 06:03:06.179086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:14.682 [2024-07-13 06:03:06.179859] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:14.682 [2024-07-13 06:03:06.179876] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1d1887a0-2646-4779-9036-7e2a3e28e62a 00:16:14.682 [2024-07-13 06:03:06.179892] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:14.682 [2024-07-13 06:03:06.179906] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:14.682 [2024-07-13 06:03:06.179918] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:14.682 [2024-07-13 06:03:06.179934] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:14.682 [2024-07-13 06:03:06.179946] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:14.682 [2024-07-13 06:03:06.179961] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:14.682 [2024-07-13 06:03:06.179973] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:14.682 [2024-07-13 06:03:06.179986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:14.682 [2024-07-13 06:03:06.179997] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:14.682 [2024-07-13 06:03:06.180011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.682 [2024-07-13 06:03:06.180023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:14.682 [2024-07-13 06:03:06.180050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.650 ms 00:16:14.682 [2024-07-13 06:03:06.180065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.682 [2024-07-13 06:03:06.181661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.682 [2024-07-13 06:03:06.181709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:14.682 [2024-07-13 06:03:06.181732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.554 ms 00:16:14.682 [2024-07-13 06:03:06.181745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.682 [2024-07-13 06:03:06.181861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.682 [2024-07-13 06:03:06.181881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:14.682 [2024-07-13 06:03:06.181897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:16:14.682 [2024-07-13 06:03:06.181913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.682 [2024-07-13 06:03:06.187474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.682 [2024-07-13 06:03:06.187533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:14.682 [2024-07-13 06:03:06.187569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.682 [2024-07-13 06:03:06.187598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.682 
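The "Rollback" entries in this shutdown trace are the management layer unwinding each startup step in reverse order; once the trace below reports "FTL shutdown ... result 0", the device is gone. A hedged sketch of how a caller could verify that with the same rpc.py and jq tooling used elsewhere in this run (this check is not part of the test script itself):

    rpc.py bdev_ftl_unload -b ftl0
    # a clean unload removes the bdev, so this filter should yield an empty list
    rpc.py bdev_get_bdevs | jq -e 'map(select(.name == "ftl0")) | length == 0'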
[2024-07-13 06:03:06.187692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.682 [2024-07-13 06:03:06.187709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:14.682 [2024-07-13 06:03:06.187753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.682 [2024-07-13 06:03:06.187768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.682 [2024-07-13 06:03:06.187891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.682 [2024-07-13 06:03:06.187913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:14.683 [2024-07-13 06:03:06.187931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.683 [2024-07-13 06:03:06.187944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.683 [2024-07-13 06:03:06.187991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.683 [2024-07-13 06:03:06.188006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:14.683 [2024-07-13 06:03:06.188021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.683 [2024-07-13 06:03:06.188034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.683 [2024-07-13 06:03:06.196765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.683 [2024-07-13 06:03:06.196845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:14.683 [2024-07-13 06:03:06.196885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.683 [2024-07-13 06:03:06.196899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.683 [2024-07-13 06:03:06.203765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.683 [2024-07-13 06:03:06.203832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:14.683 [2024-07-13 06:03:06.203870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.683 [2024-07-13 06:03:06.203883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.683 [2024-07-13 06:03:06.203978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.683 [2024-07-13 06:03:06.204013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:14.683 [2024-07-13 06:03:06.204031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.683 [2024-07-13 06:03:06.204044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.683 [2024-07-13 06:03:06.204144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.683 [2024-07-13 06:03:06.204190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:14.683 [2024-07-13 06:03:06.204208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.683 [2024-07-13 06:03:06.204221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.683 [2024-07-13 06:03:06.204361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.683 [2024-07-13 06:03:06.204383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:14.683 [2024-07-13 06:03:06.204400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.683 [2024-07-13 06:03:06.204413] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.683 [2024-07-13 06:03:06.204491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.683 [2024-07-13 06:03:06.204511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:14.683 [2024-07-13 06:03:06.204527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.683 [2024-07-13 06:03:06.204539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.683 [2024-07-13 06:03:06.204601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.683 [2024-07-13 06:03:06.204621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:14.683 [2024-07-13 06:03:06.204638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.683 [2024-07-13 06:03:06.204651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.683 [2024-07-13 06:03:06.204716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.683 [2024-07-13 06:03:06.204752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:14.683 [2024-07-13 06:03:06.204770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.683 [2024-07-13 06:03:06.204782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.683 [2024-07-13 06:03:06.204982] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.803 ms, result 0 00:16:14.683 true 00:16:14.683 06:03:06 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 89342 00:16:14.683 06:03:06 ftl.ftl_fio_basic -- common/autotest_common.sh@948 -- # '[' -z 89342 ']' 00:16:14.683 06:03:06 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # kill -0 89342 00:16:14.683 06:03:06 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # uname 00:16:14.683 06:03:06 ftl.ftl_fio_basic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.683 06:03:06 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89342 00:16:14.683 06:03:06 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:14.683 killing process with pid 89342 00:16:14.683 06:03:06 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:14.683 06:03:06 ftl.ftl_fio_basic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89342' 00:16:14.683 06:03:06 ftl.ftl_fio_basic -- common/autotest_common.sh@967 -- # kill 89342 00:16:14.683 06:03:06 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # wait 89342 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:17.223 06:03:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:17.481 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:17.481 fio-3.35 00:16:17.481 Starting 1 thread 00:16:22.746 00:16:22.746 test: (groupid=0, jobs=1): err= 0: pid=89504: Sat Jul 13 06:03:13 2024 00:16:22.746 read: IOPS=919, BW=61.1MiB/s (64.0MB/s)(255MiB/4168msec) 00:16:22.746 slat (nsec): min=5545, max=30915, avg=7405.93, stdev=2970.73 00:16:22.746 clat (usec): min=339, max=1091, avg=484.13, stdev=50.92 00:16:22.746 lat (usec): min=351, max=1098, avg=491.54, stdev=51.62 00:16:22.746 clat percentiles (usec): 00:16:22.746 | 1.00th=[ 396], 5.00th=[ 433], 10.00th=[ 441], 20.00th=[ 449], 00:16:22.746 | 30.00th=[ 457], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 482], 00:16:22.746 | 70.00th=[ 498], 80.00th=[ 515], 90.00th=[ 545], 95.00th=[ 578], 00:16:22.746 | 99.00th=[ 635], 99.50th=[ 709], 99.90th=[ 1037], 99.95th=[ 1057], 00:16:22.746 | 99.99th=[ 1090] 00:16:22.746 write: IOPS=926, BW=61.5MiB/s (64.5MB/s)(256MiB/4163msec); 0 zone resets 00:16:22.746 slat (usec): min=19, max=138, avg=24.63, stdev= 5.76 00:16:22.746 clat (usec): min=359, max=1180, avg=553.71, stdev=61.65 00:16:22.746 lat (usec): min=390, max=1219, avg=578.33, stdev=62.23 00:16:22.746 clat percentiles (usec): 00:16:22.746 | 1.00th=[ 453], 5.00th=[ 474], 10.00th=[ 490], 20.00th=[ 519], 00:16:22.746 | 30.00th=[ 529], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 553], 00:16:22.746 | 70.00th=[ 570], 80.00th=[ 586], 90.00th=[ 611], 95.00th=[ 644], 00:16:22.746 | 99.00th=[ 824], 99.50th=[ 873], 99.90th=[ 1012], 99.95th=[ 1139], 00:16:22.746 | 99.99th=[ 1188] 00:16:22.746 bw ( KiB/s): min=61336, max=64328, per=99.95%, avg=62951.00, stdev=1021.46, samples=8 00:16:22.746 iops : min= 902, max= 946, avg=925.75, stdev=15.02, samples=8 00:16:22.746 lat (usec) : 500=42.67%, 750=56.26%, 1000=0.95% 00:16:22.746 lat 
(msec) : 2=0.12% 00:16:22.747 cpu : usr=98.20%, sys=0.55%, ctx=7, majf=0, minf=1326 00:16:22.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:22.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.747 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:22.747 00:16:22.747 Run status group 0 (all jobs): 00:16:22.747 READ: bw=61.1MiB/s (64.0MB/s), 61.1MiB/s-61.1MiB/s (64.0MB/s-64.0MB/s), io=255MiB (267MB), run=4168-4168msec 00:16:22.747 WRITE: bw=61.5MiB/s (64.5MB/s), 61.5MiB/s-61.5MiB/s (64.5MB/s-64.5MB/s), io=256MiB (269MB), run=4163-4163msec 00:16:22.747 ----------------------------------------------------- 00:16:22.747 Suppressions used: 00:16:22.747 count bytes template 00:16:22.747 1 5 /usr/src/fio/parse.c 00:16:22.747 1 8 libtcmalloc_minimal.so 00:16:22.747 1 904 libcrypto.so 00:16:22.747 ----------------------------------------------------- 00:16:22.747 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:23.005 06:03:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:23.263 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:23.263 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:23.263 fio-3.35 00:16:23.263 Starting 2 threads 00:16:55.331 00:16:55.332 first_half: (groupid=0, jobs=1): err= 0: pid=89598: Sat Jul 13 06:03:43 2024 00:16:55.332 read: IOPS=2311, BW=9245KiB/s (9467kB/s)(255MiB/28257msec) 00:16:55.332 slat (usec): min=4, max=351, avg= 7.35, stdev= 3.44 00:16:55.332 clat (usec): min=982, max=318965, avg=43897.50, stdev=20598.04 00:16:55.332 lat (usec): min=990, max=318971, avg=43904.85, stdev=20598.26 00:16:55.332 clat percentiles (msec): 00:16:55.332 | 1.00th=[ 17], 5.00th=[ 37], 10.00th=[ 38], 20.00th=[ 38], 00:16:55.332 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40], 00:16:55.332 | 70.00th=[ 41], 80.00th=[ 45], 90.00th=[ 50], 95.00th=[ 68], 00:16:55.332 | 99.00th=[ 159], 99.50th=[ 182], 99.90th=[ 232], 99.95th=[ 247], 00:16:55.332 | 99.99th=[ 309] 00:16:55.332 write: IOPS=2799, BW=10.9MiB/s (11.5MB/s)(256MiB/23413msec); 0 zone resets 00:16:55.332 slat (usec): min=5, max=1019, avg= 9.23, stdev= 7.90 00:16:55.332 clat (usec): min=453, max=108139, avg=11410.00, stdev=19082.47 00:16:55.332 lat (usec): min=466, max=108146, avg=11419.23, stdev=19082.57 00:16:55.332 clat percentiles (usec): 00:16:55.332 | 1.00th=[ 988], 5.00th=[ 1319], 10.00th=[ 1532], 20.00th=[ 2057], 00:16:55.332 | 30.00th=[ 3752], 40.00th=[ 5342], 50.00th=[ 6456], 60.00th=[ 7308], 00:16:55.332 | 70.00th=[ 8586], 80.00th=[ 12649], 90.00th=[ 16450], 95.00th=[ 46400], 00:16:55.332 | 99.00th=[ 98042], 99.50th=[101188], 99.90th=[103285], 99.95th=[105382], 00:16:55.332 | 99.99th=[107480] 00:16:55.332 bw ( KiB/s): min= 3760, max=39880, per=100.00%, avg=22791.70, stdev=10206.56, samples=23 00:16:55.332 iops : min= 940, max= 9970, avg=5697.91, stdev=2551.65, samples=23 00:16:55.332 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.49% 00:16:55.332 lat (msec) : 2=9.25%, 4=6.25%, 10=22.03%, 20=8.63%, 50=45.96% 00:16:55.332 lat (msec) : 100=5.71%, 250=1.60%, 500=0.02% 00:16:55.332 cpu : usr=98.41%, sys=0.50%, ctx=73, majf=0, minf=5537 00:16:55.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:55.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.332 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.332 issued rwts: total=65309,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.332 second_half: (groupid=0, jobs=1): err= 0: pid=89599: Sat Jul 13 06:03:43 2024 00:16:55.332 read: IOPS=2292, BW=9172KiB/s (9392kB/s)(255MiB/28485msec) 00:16:55.332 slat (nsec): min=4617, max=48882, avg=7212.08, stdev=1764.60 00:16:55.332 clat (usec): min=1104, max=324145, avg=43278.45, stdev=24732.82 00:16:55.332 lat (usec): min=1113, max=324152, avg=43285.66, stdev=24733.02 00:16:55.332 clat percentiles (msec): 00:16:55.332 | 1.00th=[ 12], 5.00th=[ 35], 10.00th=[ 38], 20.00th=[ 38], 00:16:55.332 | 30.00th=[ 39], 40.00th=[ 39], 50.00th=[ 39], 60.00th=[ 40], 00:16:55.332 | 70.00th=[ 40], 80.00th=[ 43], 90.00th=[ 46], 95.00th=[ 60], 
00:16:55.332 | 99.00th=[ 184], 99.50th=[ 209], 99.90th=[ 253], 99.95th=[ 288], 00:16:55.332 | 99.99th=[ 317] 00:16:55.332 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(256MiB/25802msec); 0 zone resets 00:16:55.332 slat (usec): min=5, max=566, avg= 9.13, stdev= 5.37 00:16:55.332 clat (usec): min=495, max=107758, avg=12477.11, stdev=20841.84 00:16:55.332 lat (usec): min=503, max=107770, avg=12486.24, stdev=20842.02 00:16:55.332 clat percentiles (usec): 00:16:55.332 | 1.00th=[ 979], 5.00th=[ 1303], 10.00th=[ 1500], 20.00th=[ 1811], 00:16:55.332 | 30.00th=[ 2180], 40.00th=[ 3490], 50.00th=[ 4883], 60.00th=[ 6587], 00:16:55.332 | 70.00th=[ 8717], 80.00th=[ 13829], 90.00th=[ 38536], 95.00th=[ 58983], 00:16:55.332 | 99.00th=[ 99091], 99.50th=[101188], 99.90th=[104334], 99.95th=[105382], 00:16:55.332 | 99.99th=[107480] 00:16:55.332 bw ( KiB/s): min= 1664, max=52432, per=99.23%, avg=20164.92, stdev=14242.41, samples=26 00:16:55.332 iops : min= 416, max=13108, avg=5041.23, stdev=3560.60, samples=26 00:16:55.332 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.54% 00:16:55.332 lat (msec) : 2=12.45%, 4=9.61%, 10=13.59%, 20=9.40%, 50=48.34% 00:16:55.332 lat (msec) : 100=3.93%, 250=2.03%, 500=0.05% 00:16:55.332 cpu : usr=99.10%, sys=0.22%, ctx=60, majf=0, minf=5597 00:16:55.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:55.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.332 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.332 issued rwts: total=65313,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.332 00:16:55.332 Run status group 0 (all jobs): 00:16:55.332 READ: bw=17.9MiB/s (18.8MB/s), 9172KiB/s-9245KiB/s (9392kB/s-9467kB/s), io=510MiB (535MB), run=28257-28485msec 00:16:55.332 WRITE: bw=19.8MiB/s (20.8MB/s), 9.92MiB/s-10.9MiB/s (10.4MB/s-11.5MB/s), io=512MiB (537MB), run=23413-25802msec 00:16:55.332 ----------------------------------------------------- 00:16:55.332 Suppressions used: 00:16:55.332 count bytes template 00:16:55.332 2 10 /usr/src/fio/parse.c 00:16:55.332 4 384 /usr/src/fio/iolog.c 00:16:55.332 1 8 libtcmalloc_minimal.so 00:16:55.332 1 904 libcrypto.so 00:16:55.332 ----------------------------------------------------- 00:16:55.332 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:55.332 06:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:55.332 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:55.332 fio-3.35 00:16:55.332 Starting 1 thread 00:17:10.207 00:17:10.207 test: (groupid=0, jobs=1): err= 0: pid=89947: Sat Jul 13 06:04:01 2024 00:17:10.207 read: IOPS=6478, BW=25.3MiB/s (26.5MB/s)(255MiB/10065msec) 00:17:10.207 slat (nsec): min=4392, max=54751, avg=6676.13, stdev=1959.88 00:17:10.207 clat (usec): min=743, max=38742, avg=19747.24, stdev=1001.87 00:17:10.207 lat (usec): min=748, max=38747, avg=19753.91, stdev=1001.91 00:17:10.207 clat percentiles (usec): 00:17:10.207 | 1.00th=[18744], 5.00th=[19006], 10.00th=[19006], 20.00th=[19268], 00:17:10.207 | 30.00th=[19268], 40.00th=[19530], 50.00th=[19530], 60.00th=[19792], 00:17:10.207 | 70.00th=[19792], 80.00th=[20055], 90.00th=[20579], 95.00th=[21365], 00:17:10.207 | 99.00th=[22676], 99.50th=[22938], 99.90th=[28967], 99.95th=[33817], 00:17:10.207 | 99.99th=[38011] 00:17:10.207 write: IOPS=11.9k, BW=46.4MiB/s (48.6MB/s)(256MiB/5518msec); 0 zone resets 00:17:10.207 slat (usec): min=5, max=144, avg= 9.12, stdev= 4.52 00:17:10.207 clat (usec): min=649, max=61191, avg=10716.19, stdev=13356.20 00:17:10.207 lat (usec): min=657, max=61200, avg=10725.31, stdev=13356.21 00:17:10.207 clat percentiles (usec): 00:17:10.207 | 1.00th=[ 922], 5.00th=[ 1123], 10.00th=[ 1254], 20.00th=[ 1418], 00:17:10.207 | 30.00th=[ 1614], 40.00th=[ 2089], 50.00th=[ 7111], 60.00th=[ 8225], 00:17:10.207 | 70.00th=[ 9503], 80.00th=[11207], 90.00th=[38536], 95.00th=[41157], 00:17:10.207 | 99.00th=[45876], 99.50th=[47449], 99.90th=[52167], 99.95th=[53216], 00:17:10.207 | 99.99th=[56361] 00:17:10.207 bw ( KiB/s): min= 1016, max=64576, per=91.97%, avg=43690.67, stdev=15974.40, samples=12 00:17:10.207 iops : min= 254, max=16144, avg=10922.67, stdev=3993.60, samples=12 00:17:10.207 lat (usec) : 750=0.02%, 1000=1.00% 00:17:10.207 lat (msec) : 2=18.69%, 4=1.21%, 10=15.80%, 20=42.98%, 50=20.19% 00:17:10.207 lat (msec) : 100=0.10% 00:17:10.207 cpu : usr=98.89%, sys=0.33%, 
ctx=23, majf=0, minf=5577 00:17:10.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:10.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.207 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:10.207 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:10.207 00:17:10.207 Run status group 0 (all jobs): 00:17:10.207 READ: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=255MiB (267MB), run=10065-10065msec 00:17:10.207 WRITE: bw=46.4MiB/s (48.6MB/s), 46.4MiB/s-46.4MiB/s (48.6MB/s-48.6MB/s), io=256MiB (268MB), run=5518-5518msec 00:17:10.774 ----------------------------------------------------- 00:17:10.774 Suppressions used: 00:17:10.774 count bytes template 00:17:10.774 1 5 /usr/src/fio/parse.c 00:17:10.774 2 192 /usr/src/fio/iolog.c 00:17:10.774 1 8 libtcmalloc_minimal.so 00:17:10.774 1 904 libcrypto.so 00:17:10.774 ----------------------------------------------------- 00:17:10.774 00:17:10.774 06:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:10.774 06:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:10.774 06:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:10.774 06:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:10.774 06:04:02 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:17:10.774 06:04:02 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:10.774 Remove shared memory files 00:17:10.774 06:04:02 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:17:10.774 06:04:02 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:17:10.774 06:04:02 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid73995 /dev/shm/spdk_tgt_trace.pid88322 00:17:10.774 06:04:02 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:10.774 06:04:02 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:17:10.774 ************************************ 00:17:10.774 END TEST ftl_fio_basic 00:17:10.774 ************************************ 00:17:10.774 00:17:10.774 real 1m3.892s 00:17:10.774 user 2m26.315s 00:17:10.774 sys 0m3.431s 00:17:10.774 06:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:10.774 06:04:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:10.774 06:04:02 ftl -- common/autotest_common.sh@1142 -- # return 0 00:17:10.774 06:04:02 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:10.774 06:04:02 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:10.774 06:04:02 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:10.774 06:04:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:10.774 ************************************ 00:17:10.774 START TEST ftl_bdevperf 00:17:10.774 ************************************ 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:10.774 * Looking for test storage... 
00:17:10.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:17:10.774 06:04:02 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:10.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=90185 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 90185 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 90185 ']' 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.774 06:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:11.032 [2024-07-13 06:04:02.560484] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:11.032 [2024-07-13 06:04:02.560870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90185 ] 00:17:11.032 [2024-07-13 06:04:02.702338] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.032 [2024-07-13 06:04:02.737884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.290 06:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.290 06:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:17:11.290 06:04:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:11.290 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:17:11.290 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:11.290 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:17:11.290 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:17:11.290 06:04:02 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:11.548 06:04:03 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:11.548 06:04:03 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:17:11.548 06:04:03 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:11.548 06:04:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:11.548 06:04:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:11.548 06:04:03 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:11.548 06:04:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:11.548 06:04:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:11.807 06:04:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:11.807 { 00:17:11.807 "name": "nvme0n1", 00:17:11.807 "aliases": [ 00:17:11.807 "dab4b5ef-75f0-43b0-9788-2c1c7499f8f5" 00:17:11.807 ], 00:17:11.807 "product_name": "NVMe disk", 00:17:11.807 "block_size": 4096, 00:17:11.807 "num_blocks": 1310720, 00:17:11.807 "uuid": "dab4b5ef-75f0-43b0-9788-2c1c7499f8f5", 00:17:11.807 "assigned_rate_limits": { 00:17:11.807 "rw_ios_per_sec": 0, 00:17:11.807 "rw_mbytes_per_sec": 0, 00:17:11.807 "r_mbytes_per_sec": 0, 00:17:11.807 "w_mbytes_per_sec": 0 00:17:11.807 }, 00:17:11.807 "claimed": true, 00:17:11.807 "claim_type": "read_many_write_one", 00:17:11.807 "zoned": false, 00:17:11.807 "supported_io_types": { 00:17:11.807 "read": true, 00:17:11.807 "write": true, 00:17:11.807 "unmap": true, 00:17:11.807 "flush": true, 00:17:11.807 "reset": true, 00:17:11.807 "nvme_admin": true, 00:17:11.807 "nvme_io": true, 00:17:11.807 "nvme_io_md": false, 00:17:11.807 "write_zeroes": true, 00:17:11.807 "zcopy": false, 00:17:11.807 "get_zone_info": false, 00:17:11.807 "zone_management": false, 00:17:11.807 "zone_append": false, 00:17:11.807 "compare": true, 00:17:11.807 "compare_and_write": false, 00:17:11.807 "abort": true, 00:17:11.807 "seek_hole": false, 00:17:11.807 "seek_data": false, 00:17:11.807 "copy": true, 00:17:11.807 "nvme_iov_md": false 00:17:11.807 }, 00:17:11.807 "driver_specific": { 00:17:11.807 "nvme": [ 00:17:11.807 { 00:17:11.807 "pci_address": "0000:00:11.0", 00:17:11.807 "trid": { 00:17:11.807 "trtype": "PCIe", 00:17:11.807 "traddr": "0000:00:11.0" 00:17:11.807 }, 00:17:11.807 "ctrlr_data": { 00:17:11.807 "cntlid": 0, 00:17:11.807 "vendor_id": "0x1b36", 00:17:11.807 "model_number": "QEMU NVMe Ctrl", 00:17:11.807 "serial_number": "12341", 00:17:11.807 "firmware_revision": "8.0.0", 00:17:11.807 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:11.807 "oacs": { 00:17:11.807 "security": 0, 00:17:11.807 "format": 1, 00:17:11.807 "firmware": 0, 00:17:11.807 "ns_manage": 1 00:17:11.807 }, 00:17:11.807 "multi_ctrlr": false, 00:17:11.807 "ana_reporting": false 00:17:11.807 }, 00:17:11.807 "vs": { 00:17:11.808 "nvme_version": "1.4" 00:17:11.808 }, 00:17:11.808 "ns_data": { 00:17:11.808 "id": 1, 00:17:11.808 "can_share": false 00:17:11.808 } 00:17:11.808 } 00:17:11.808 ], 00:17:11.808 "mp_policy": "active_passive" 00:17:11.808 } 00:17:11.808 } 00:17:11.808 ]' 00:17:11.808 06:04:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:11.808 06:04:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:11.808 06:04:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:11.808 06:04:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:11.808 06:04:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:11.808 06:04:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:17:11.808 06:04:03 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:17:11.808 06:04:03 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:11.808 06:04:03 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:17:11.808 06:04:03 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:11.808 06:04:03 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:12.066 06:04:03 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=1977d6ee-f77f-4b2c-93df-1de307ca552c 00:17:12.066 06:04:03 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:17:12.066 06:04:03 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1977d6ee-f77f-4b2c-93df-1de307ca552c 00:17:12.632 06:04:04 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:12.632 06:04:04 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=09c19a18-76f8-4960-b0f2-9abc8a62f31d 00:17:12.632 06:04:04 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 09c19a18-76f8-4960-b0f2-9abc8a62f31d 00:17:12.890 06:04:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=2557e08b-abec-48ab-86b6-5e116a406b60 00:17:12.890 06:04:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2557e08b-abec-48ab-86b6-5e116a406b60 00:17:12.890 06:04:04 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:12.890 06:04:04 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:12.890 06:04:04 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=2557e08b-abec-48ab-86b6-5e116a406b60 00:17:12.890 06:04:04 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:12.890 06:04:04 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 2557e08b-abec-48ab-86b6-5e116a406b60 00:17:12.890 06:04:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=2557e08b-abec-48ab-86b6-5e116a406b60 00:17:12.890 06:04:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:12.890 06:04:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:12.890 06:04:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:12.890 06:04:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2557e08b-abec-48ab-86b6-5e116a406b60 00:17:13.148 06:04:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:13.148 { 00:17:13.148 "name": "2557e08b-abec-48ab-86b6-5e116a406b60", 00:17:13.148 "aliases": [ 00:17:13.148 "lvs/nvme0n1p0" 00:17:13.148 ], 00:17:13.148 "product_name": "Logical Volume", 00:17:13.148 "block_size": 4096, 00:17:13.148 "num_blocks": 26476544, 00:17:13.148 "uuid": "2557e08b-abec-48ab-86b6-5e116a406b60", 00:17:13.148 "assigned_rate_limits": { 00:17:13.148 "rw_ios_per_sec": 0, 00:17:13.148 "rw_mbytes_per_sec": 0, 00:17:13.148 "r_mbytes_per_sec": 0, 00:17:13.148 "w_mbytes_per_sec": 0 00:17:13.148 }, 00:17:13.148 "claimed": false, 00:17:13.148 "zoned": false, 00:17:13.148 "supported_io_types": { 00:17:13.148 "read": true, 00:17:13.148 "write": true, 00:17:13.148 "unmap": true, 00:17:13.148 "flush": false, 00:17:13.148 "reset": true, 00:17:13.148 "nvme_admin": false, 00:17:13.148 "nvme_io": false, 00:17:13.148 "nvme_io_md": false, 00:17:13.148 "write_zeroes": true, 00:17:13.148 "zcopy": false, 00:17:13.148 "get_zone_info": false, 00:17:13.148 "zone_management": false, 00:17:13.148 "zone_append": false, 00:17:13.148 "compare": false, 00:17:13.148 "compare_and_write": false, 00:17:13.148 "abort": false, 00:17:13.148 "seek_hole": true, 
00:17:13.148 "seek_data": true, 00:17:13.148 "copy": false, 00:17:13.148 "nvme_iov_md": false 00:17:13.148 }, 00:17:13.148 "driver_specific": { 00:17:13.148 "lvol": { 00:17:13.148 "lvol_store_uuid": "09c19a18-76f8-4960-b0f2-9abc8a62f31d", 00:17:13.148 "base_bdev": "nvme0n1", 00:17:13.148 "thin_provision": true, 00:17:13.148 "num_allocated_clusters": 0, 00:17:13.148 "snapshot": false, 00:17:13.148 "clone": false, 00:17:13.148 "esnap_clone": false 00:17:13.148 } 00:17:13.148 } 00:17:13.148 } 00:17:13.148 ]' 00:17:13.148 06:04:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:13.406 06:04:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:13.406 06:04:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:13.406 06:04:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:13.406 06:04:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:13.406 06:04:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:13.406 06:04:04 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:13.406 06:04:04 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:13.406 06:04:04 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:13.664 06:04:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:13.664 06:04:05 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:13.664 06:04:05 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 2557e08b-abec-48ab-86b6-5e116a406b60 00:17:13.664 06:04:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=2557e08b-abec-48ab-86b6-5e116a406b60 00:17:13.664 06:04:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:13.664 06:04:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:13.664 06:04:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:13.664 06:04:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2557e08b-abec-48ab-86b6-5e116a406b60 00:17:13.922 06:04:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:13.922 { 00:17:13.922 "name": "2557e08b-abec-48ab-86b6-5e116a406b60", 00:17:13.923 "aliases": [ 00:17:13.923 "lvs/nvme0n1p0" 00:17:13.923 ], 00:17:13.923 "product_name": "Logical Volume", 00:17:13.923 "block_size": 4096, 00:17:13.923 "num_blocks": 26476544, 00:17:13.923 "uuid": "2557e08b-abec-48ab-86b6-5e116a406b60", 00:17:13.923 "assigned_rate_limits": { 00:17:13.923 "rw_ios_per_sec": 0, 00:17:13.923 "rw_mbytes_per_sec": 0, 00:17:13.923 "r_mbytes_per_sec": 0, 00:17:13.923 "w_mbytes_per_sec": 0 00:17:13.923 }, 00:17:13.923 "claimed": false, 00:17:13.923 "zoned": false, 00:17:13.923 "supported_io_types": { 00:17:13.923 "read": true, 00:17:13.923 "write": true, 00:17:13.923 "unmap": true, 00:17:13.923 "flush": false, 00:17:13.923 "reset": true, 00:17:13.923 "nvme_admin": false, 00:17:13.923 "nvme_io": false, 00:17:13.923 "nvme_io_md": false, 00:17:13.923 "write_zeroes": true, 00:17:13.923 "zcopy": false, 00:17:13.923 "get_zone_info": false, 00:17:13.923 "zone_management": false, 00:17:13.923 "zone_append": false, 00:17:13.923 "compare": false, 00:17:13.923 "compare_and_write": false, 00:17:13.923 "abort": false, 00:17:13.923 "seek_hole": true, 00:17:13.923 "seek_data": true, 00:17:13.923 
"copy": false, 00:17:13.923 "nvme_iov_md": false 00:17:13.923 }, 00:17:13.923 "driver_specific": { 00:17:13.923 "lvol": { 00:17:13.923 "lvol_store_uuid": "09c19a18-76f8-4960-b0f2-9abc8a62f31d", 00:17:13.923 "base_bdev": "nvme0n1", 00:17:13.923 "thin_provision": true, 00:17:13.923 "num_allocated_clusters": 0, 00:17:13.923 "snapshot": false, 00:17:13.923 "clone": false, 00:17:13.923 "esnap_clone": false 00:17:13.923 } 00:17:13.923 } 00:17:13.923 } 00:17:13.923 ]' 00:17:13.923 06:04:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:13.923 06:04:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:13.923 06:04:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:13.923 06:04:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:13.923 06:04:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:13.923 06:04:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:13.923 06:04:05 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:17:13.923 06:04:05 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:14.182 06:04:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:17:14.182 06:04:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 2557e08b-abec-48ab-86b6-5e116a406b60 00:17:14.182 06:04:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=2557e08b-abec-48ab-86b6-5e116a406b60 00:17:14.182 06:04:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:14.182 06:04:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:14.182 06:04:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:14.182 06:04:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2557e08b-abec-48ab-86b6-5e116a406b60 00:17:14.441 06:04:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:14.441 { 00:17:14.441 "name": "2557e08b-abec-48ab-86b6-5e116a406b60", 00:17:14.441 "aliases": [ 00:17:14.441 "lvs/nvme0n1p0" 00:17:14.441 ], 00:17:14.441 "product_name": "Logical Volume", 00:17:14.441 "block_size": 4096, 00:17:14.441 "num_blocks": 26476544, 00:17:14.441 "uuid": "2557e08b-abec-48ab-86b6-5e116a406b60", 00:17:14.441 "assigned_rate_limits": { 00:17:14.441 "rw_ios_per_sec": 0, 00:17:14.441 "rw_mbytes_per_sec": 0, 00:17:14.441 "r_mbytes_per_sec": 0, 00:17:14.441 "w_mbytes_per_sec": 0 00:17:14.441 }, 00:17:14.441 "claimed": false, 00:17:14.441 "zoned": false, 00:17:14.441 "supported_io_types": { 00:17:14.441 "read": true, 00:17:14.441 "write": true, 00:17:14.441 "unmap": true, 00:17:14.441 "flush": false, 00:17:14.441 "reset": true, 00:17:14.441 "nvme_admin": false, 00:17:14.441 "nvme_io": false, 00:17:14.441 "nvme_io_md": false, 00:17:14.441 "write_zeroes": true, 00:17:14.441 "zcopy": false, 00:17:14.441 "get_zone_info": false, 00:17:14.441 "zone_management": false, 00:17:14.441 "zone_append": false, 00:17:14.441 "compare": false, 00:17:14.441 "compare_and_write": false, 00:17:14.441 "abort": false, 00:17:14.441 "seek_hole": true, 00:17:14.441 "seek_data": true, 00:17:14.441 "copy": false, 00:17:14.441 "nvme_iov_md": false 00:17:14.441 }, 00:17:14.441 "driver_specific": { 00:17:14.441 "lvol": { 00:17:14.441 "lvol_store_uuid": "09c19a18-76f8-4960-b0f2-9abc8a62f31d", 00:17:14.441 "base_bdev": 
"nvme0n1", 00:17:14.441 "thin_provision": true, 00:17:14.441 "num_allocated_clusters": 0, 00:17:14.441 "snapshot": false, 00:17:14.441 "clone": false, 00:17:14.441 "esnap_clone": false 00:17:14.441 } 00:17:14.441 } 00:17:14.441 } 00:17:14.441 ]' 00:17:14.441 06:04:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:14.441 06:04:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:14.441 06:04:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:14.700 06:04:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:14.700 06:04:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:14.700 06:04:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:14.700 06:04:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:17:14.700 06:04:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2557e08b-abec-48ab-86b6-5e116a406b60 -c nvc0n1p0 --l2p_dram_limit 20 00:17:14.960 [2024-07-13 06:04:06.444087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.960 [2024-07-13 06:04:06.444201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:14.960 [2024-07-13 06:04:06.444233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:14.960 [2024-07-13 06:04:06.444260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.960 [2024-07-13 06:04:06.444346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.960 [2024-07-13 06:04:06.444370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:14.960 [2024-07-13 06:04:06.444388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:14.960 [2024-07-13 06:04:06.444405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.960 [2024-07-13 06:04:06.444443] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:14.960 [2024-07-13 06:04:06.444768] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:14.960 [2024-07-13 06:04:06.444803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.960 [2024-07-13 06:04:06.444820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:14.960 [2024-07-13 06:04:06.444847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:17:14.960 [2024-07-13 06:04:06.444862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.960 [2024-07-13 06:04:06.445062] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3356ec57-c021-460e-988f-55e34e87e235 00:17:14.960 [2024-07-13 06:04:06.446099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.960 [2024-07-13 06:04:06.446148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:14.960 [2024-07-13 06:04:06.446174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:14.960 [2024-07-13 06:04:06.446187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.960 [2024-07-13 06:04:06.451238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.960 [2024-07-13 06:04:06.451300] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:14.960 [2024-07-13 06:04:06.451322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.995 ms 00:17:14.960 [2024-07-13 06:04:06.451334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.960 [2024-07-13 06:04:06.451454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.960 [2024-07-13 06:04:06.451472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:14.960 [2024-07-13 06:04:06.451488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:17:14.960 [2024-07-13 06:04:06.451514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.960 [2024-07-13 06:04:06.451583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.960 [2024-07-13 06:04:06.451600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:14.960 [2024-07-13 06:04:06.451615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:14.960 [2024-07-13 06:04:06.451627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.960 [2024-07-13 06:04:06.451659] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:14.960 [2024-07-13 06:04:06.453297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.960 [2024-07-13 06:04:06.453354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:14.960 [2024-07-13 06:04:06.453375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.650 ms 00:17:14.960 [2024-07-13 06:04:06.453398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.960 [2024-07-13 06:04:06.453473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.960 [2024-07-13 06:04:06.453499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:14.960 [2024-07-13 06:04:06.453513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:14.960 [2024-07-13 06:04:06.453539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.960 [2024-07-13 06:04:06.453589] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:14.960 [2024-07-13 06:04:06.453783] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:14.960 [2024-07-13 06:04:06.453821] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:14.960 [2024-07-13 06:04:06.453840] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:14.960 [2024-07-13 06:04:06.453863] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:14.960 [2024-07-13 06:04:06.453881] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:14.960 [2024-07-13 06:04:06.453896] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:14.960 [2024-07-13 06:04:06.453913] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:14.960 [2024-07-13 06:04:06.453925] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:14.960 [2024-07-13 06:04:06.453939] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:17:14.960 [2024-07-13 06:04:06.453952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.960 [2024-07-13 06:04:06.453965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:14.960 [2024-07-13 06:04:06.453978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:17:14.960 [2024-07-13 06:04:06.453993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.960 [2024-07-13 06:04:06.454087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.960 [2024-07-13 06:04:06.454109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:14.960 [2024-07-13 06:04:06.454122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:14.960 [2024-07-13 06:04:06.454136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.960 [2024-07-13 06:04:06.454262] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:14.960 [2024-07-13 06:04:06.454286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:14.960 [2024-07-13 06:04:06.454301] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:14.960 [2024-07-13 06:04:06.454316] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.960 [2024-07-13 06:04:06.454329] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:14.960 [2024-07-13 06:04:06.454342] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:14.960 [2024-07-13 06:04:06.454354] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:14.960 [2024-07-13 06:04:06.454368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:14.960 [2024-07-13 06:04:06.454380] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:14.960 [2024-07-13 06:04:06.454393] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:14.960 [2024-07-13 06:04:06.454405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:14.960 [2024-07-13 06:04:06.454418] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:14.960 [2024-07-13 06:04:06.454429] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:14.960 [2024-07-13 06:04:06.454445] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:14.960 [2024-07-13 06:04:06.454457] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:14.960 [2024-07-13 06:04:06.454470] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.961 [2024-07-13 06:04:06.454481] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:14.961 [2024-07-13 06:04:06.454494] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:14.961 [2024-07-13 06:04:06.454506] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.961 [2024-07-13 06:04:06.454520] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:14.961 [2024-07-13 06:04:06.454531] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:14.961 [2024-07-13 06:04:06.454546] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.961 [2024-07-13 06:04:06.454558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:14.961 [2024-07-13 06:04:06.454571] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:14.961 [2024-07-13 06:04:06.454582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.961 [2024-07-13 06:04:06.454596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:14.961 [2024-07-13 06:04:06.454607] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:14.961 [2024-07-13 06:04:06.454622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.961 [2024-07-13 06:04:06.454634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:14.961 [2024-07-13 06:04:06.454649] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:14.961 [2024-07-13 06:04:06.454660] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:14.961 [2024-07-13 06:04:06.454674] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:14.961 [2024-07-13 06:04:06.454685] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:14.961 [2024-07-13 06:04:06.454698] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:14.961 [2024-07-13 06:04:06.454710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:14.961 [2024-07-13 06:04:06.454723] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:14.961 [2024-07-13 06:04:06.454734] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:14.961 [2024-07-13 06:04:06.454747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:14.961 [2024-07-13 06:04:06.454759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:14.961 [2024-07-13 06:04:06.454772] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.961 [2024-07-13 06:04:06.454783] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:14.961 [2024-07-13 06:04:06.454797] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:14.961 [2024-07-13 06:04:06.454808] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.961 [2024-07-13 06:04:06.454821] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:14.961 [2024-07-13 06:04:06.454842] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:14.961 [2024-07-13 06:04:06.454859] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:14.961 [2024-07-13 06:04:06.454871] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:14.961 [2024-07-13 06:04:06.454886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:14.961 [2024-07-13 06:04:06.454898] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:14.961 [2024-07-13 06:04:06.454911] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:14.961 [2024-07-13 06:04:06.454923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:14.961 [2024-07-13 06:04:06.454937] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:14.961 [2024-07-13 06:04:06.454949] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:14.961 [2024-07-13 06:04:06.454968] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:14.961 [2024-07-13 06:04:06.454986] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:14.961 [2024-07-13 06:04:06.455001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:14.961 [2024-07-13 06:04:06.455013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:14.961 [2024-07-13 06:04:06.455027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:14.961 [2024-07-13 06:04:06.455039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:14.961 [2024-07-13 06:04:06.455054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:14.961 [2024-07-13 06:04:06.455066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:14.961 [2024-07-13 06:04:06.455082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:14.961 [2024-07-13 06:04:06.455094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:14.961 [2024-07-13 06:04:06.455109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:14.961 [2024-07-13 06:04:06.455121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:14.961 [2024-07-13 06:04:06.455151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:14.961 [2024-07-13 06:04:06.455165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:14.961 [2024-07-13 06:04:06.455179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:14.961 [2024-07-13 06:04:06.455192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:14.961 [2024-07-13 06:04:06.455218] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:14.961 [2024-07-13 06:04:06.455231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:14.961 [2024-07-13 06:04:06.455247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:14.961 [2024-07-13 06:04:06.455259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:14.961 [2024-07-13 06:04:06.455275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:14.961 [2024-07-13 06:04:06.455287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:14.961 [2024-07-13 06:04:06.455303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:14.961 [2024-07-13 06:04:06.455315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:14.961 [2024-07-13 06:04:06.455333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:17:14.961 [2024-07-13 06:04:06.455344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:14.961 [2024-07-13 06:04:06.455419] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:14.961 [2024-07-13 06:04:06.455446] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:17.507 [2024-07-13 06:04:08.676155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.676259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:17.507 [2024-07-13 06:04:08.676287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2220.730 ms 00:17:17.507 [2024-07-13 06:04:08.676301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.694707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.694763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:17.507 [2024-07-13 06:04:08.694815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.298 ms 00:17:17.507 [2024-07-13 06:04:08.694828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.695012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.695040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:17.507 [2024-07-13 06:04:08.695057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:17.507 [2024-07-13 06:04:08.695069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.704442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.704502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:17.507 [2024-07-13 06:04:08.704552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.315 ms 00:17:17.507 [2024-07-13 06:04:08.704570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.704635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.704660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:17.507 [2024-07-13 06:04:08.704680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:17.507 [2024-07-13 06:04:08.704696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.705107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.705153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:17.507 [2024-07-13 06:04:08.705192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:17:17.507 [2024-07-13 06:04:08.705210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.705417] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.705448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:17.507 [2024-07-13 06:04:08.705473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:17:17.507 [2024-07-13 06:04:08.705489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.711037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.711078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:17.507 [2024-07-13 06:04:08.711115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.486 ms 00:17:17.507 [2024-07-13 06:04:08.711128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.720439] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:17.507 [2024-07-13 06:04:08.725621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.725664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:17.507 [2024-07-13 06:04:08.725699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.385 ms 00:17:17.507 [2024-07-13 06:04:08.725715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.774153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.774251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:17.507 [2024-07-13 06:04:08.774276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.393 ms 00:17:17.507 [2024-07-13 06:04:08.774293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.774503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.774526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:17.507 [2024-07-13 06:04:08.774540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:17:17.507 [2024-07-13 06:04:08.774555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.778133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.778223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:17.507 [2024-07-13 06:04:08.778244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.519 ms 00:17:17.507 [2024-07-13 06:04:08.778262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.781133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.781212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:17.507 [2024-07-13 06:04:08.781233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.826 ms 00:17:17.507 [2024-07-13 06:04:08.781248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.781634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.781671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:17.507 [2024-07-13 06:04:08.781686] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:17:17.507 [2024-07-13 06:04:08.781702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.813485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.813599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:17.507 [2024-07-13 06:04:08.813634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.757 ms 00:17:17.507 [2024-07-13 06:04:08.813650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.817929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.817990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:17.507 [2024-07-13 06:04:08.818009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.232 ms 00:17:17.507 [2024-07-13 06:04:08.818024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.821718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.821765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:17.507 [2024-07-13 06:04:08.821783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.652 ms 00:17:17.507 [2024-07-13 06:04:08.821798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.825583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.825658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:17.507 [2024-07-13 06:04:08.825676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.742 ms 00:17:17.507 [2024-07-13 06:04:08.825693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.825739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.825769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:17.507 [2024-07-13 06:04:08.825784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:17.507 [2024-07-13 06:04:08.825797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.825868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.507 [2024-07-13 06:04:08.825896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:17.507 [2024-07-13 06:04:08.825909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:17.507 [2024-07-13 06:04:08.825926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.507 [2024-07-13 06:04:08.826941] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2382.413 ms, result 0 00:17:17.507 { 00:17:17.507 "name": "ftl0", 00:17:17.507 "uuid": "3356ec57-c021-460e-988f-55e34e87e235" 00:17:17.507 } 00:17:17.507 06:04:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:17.507 06:04:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:17:17.507 06:04:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:17:17.507 06:04:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 [2024-07-13 06:04:09.237627] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 
00:17:17.765 I/O size of 69632 is greater than zero copy threshold (65536). 
00:17:17.765 Zero copy mechanism will not be used. 
00:17:17.765 Running I/O for 4 seconds... 
00:17:21.953 
00:17:21.953 Latency(us) 
00:17:21.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:21.953 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 
00:17:21.953 ftl0 : 4.00 1787.56 118.70 0.00 0.00 581.68 238.31 2815.07 
00:17:21.953 =================================================================================================================== 
00:17:21.953 Total : 1787.56 118.70 0.00 0.00 581.68 238.31 2815.07 
00:17:21.953 0 
00:17:21.953 [2024-07-13 06:04:13.244793] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 
00:17:21.953 06:04:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 [2024-07-13 06:04:13.379957] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 
00:17:21.953 Running I/O for 4 seconds... 
00:17:26.167 
00:17:26.167 Latency(us) 
00:17:26.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:26.167 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 
00:17:26.167 ftl0 : 4.02 7132.75 27.86 0.00 0.00 17893.68 353.75 35508.60 
00:17:26.167 =================================================================================================================== 
00:17:26.167 Total : 7132.75 27.86 0.00 0.00 17893.68 0.00 35508.60 
00:17:26.167 [2024-07-13 06:04:17.407486] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 
00:17:26.167 0 
00:17:26.167 06:04:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 [2024-07-13 06:04:17.542103] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 
00:17:26.167 Running I/O for 4 seconds... 
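The three timed runs above use bdevperf's RPC-driven mode: one long-lived bdevperf process is started once, and each perform_tests RPC runs a single timed workload against it. A minimal sketch of that flow, assuming the repo layout seen in this run and using only the flags this log actually records (the bdevperf invocation matches the one timing_exit echoes later): 

  # Start bdevperf in wait-for-RPC mode (-z), restricted to the ftl0 bdev (-T ftl0). 
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 & 
  # (the harness waits for the RPC socket to come up before issuing tests) 
  # One timed workload per perform_tests call: 
  #   -q queue depth, -w workload type, -t run time in seconds, -o I/O size in bytes 
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 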
00:17:30.354 
00:17:30.354 Latency(us) 
00:17:30.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:30.354 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:17:30.354 Verification LBA range: start 0x0 length 0x1400000 
00:17:30.354 ftl0 : 4.01 6129.47 23.94 0.00 0.00 20807.46 368.64 21686.46 
00:17:30.354 =================================================================================================================== 
00:17:30.354 Total : 6129.47 23.94 0.00 0.00 20807.46 0.00 21686.46 
00:17:30.354 [2024-07-13 06:04:21.561043] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 
00:17:30.354 0 
00:17:30.354 06:04:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 
00:17:30.354 [2024-07-13 06:04:21.833534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:30.354 [2024-07-13 06:04:21.833637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 
00:17:30.354 [2024-07-13 06:04:21.833659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 
00:17:30.354 [2024-07-13 06:04:21.833676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:30.354 [2024-07-13 06:04:21.833709] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
00:17:30.354 [2024-07-13 06:04:21.834108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:30.354 [2024-07-13 06:04:21.834126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 
00:17:30.354 [2024-07-13 06:04:21.834141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 
00:17:30.354 [2024-07-13 06:04:21.834152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:30.354 [2024-07-13 06:04:21.835775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:30.354 [2024-07-13 06:04:21.835848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 
00:17:30.354 [2024-07-13 06:04:21.835871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.553 ms 
00:17:30.354 [2024-07-13 06:04:21.835883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:30.354 [2024-07-13 06:04:22.014391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:30.354 [2024-07-13 06:04:22.014455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:17:30.354 [2024-07-13 06:04:22.014506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 178.474 ms 
00:17:30.354 [2024-07-13 06:04:22.014520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:30.354 [2024-07-13 06:04:22.020942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:30.354 [2024-07-13 06:04:22.020981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 
00:17:30.354 [2024-07-13 06:04:22.021001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.357 ms 
00:17:30.354 [2024-07-13 06:04:22.021014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:30.354 [2024-07-13 06:04:22.022498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:30.354 [2024-07-13 06:04:22.022550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 
00:17:30.354 [2024-07-13 06:04:22.022586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.417 ms 00:17:30.354 [2024-07-13 06:04:22.022598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.354 [2024-07-13 06:04:22.026916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.354 [2024-07-13 06:04:22.026975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:30.354 [2024-07-13 06:04:22.027005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.256 ms 00:17:30.354 [2024-07-13 06:04:22.027023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.354 [2024-07-13 06:04:22.027188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.354 [2024-07-13 06:04:22.027216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:30.354 [2024-07-13 06:04:22.027241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:17:30.354 [2024-07-13 06:04:22.027252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.354 [2024-07-13 06:04:22.029295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.354 [2024-07-13 06:04:22.029334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:30.354 [2024-07-13 06:04:22.029366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.014 ms 00:17:30.354 [2024-07-13 06:04:22.029390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.354 [2024-07-13 06:04:22.031019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.354 [2024-07-13 06:04:22.031073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:30.354 [2024-07-13 06:04:22.031091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.567 ms 00:17:30.354 [2024-07-13 06:04:22.031102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.354 [2024-07-13 06:04:22.032432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.354 [2024-07-13 06:04:22.032468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:30.354 [2024-07-13 06:04:22.032485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.247 ms 00:17:30.354 [2024-07-13 06:04:22.032497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.354 [2024-07-13 06:04:22.033721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.354 [2024-07-13 06:04:22.033774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:30.354 [2024-07-13 06:04:22.033794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.125 ms 00:17:30.354 [2024-07-13 06:04:22.033806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.354 [2024-07-13 06:04:22.033849] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:30.354 [2024-07-13 06:04:22.033872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.033889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.033916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.033929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 
[2024-07-13 06:04:22.033956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.033969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.033980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.033994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.034005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.034018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.034030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.034045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.034056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.034069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.034080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.034093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.034104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.034117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.034129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.034142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.034153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.034202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.034216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:30.354 [2024-07-13 06:04:22.034230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:17:30.355 [2024-07-13 06:04:22.034329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.034975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:30.355 [2024-07-13 06:04:22.035399] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:30.355 [2024-07-13 06:04:22.035413] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3356ec57-c021-460e-988f-55e34e87e235 00:17:30.355 [2024-07-13 06:04:22.035426] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:30.355 [2024-07-13 06:04:22.035440] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:17:30.355 [2024-07-13 06:04:22.035451] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:30.355 [2024-07-13 06:04:22.035466] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:30.355 [2024-07-13 06:04:22.035478] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:30.355 [2024-07-13 06:04:22.035509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:30.355 [2024-07-13 06:04:22.035521] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:30.355 [2024-07-13 06:04:22.035533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:30.355 [2024-07-13 06:04:22.035544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:30.355 [2024-07-13 06:04:22.035573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.355 [2024-07-13 06:04:22.035601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:30.355 [2024-07-13 06:04:22.035631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:17:30.355 [2024-07-13 06:04:22.035644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.355 [2024-07-13 06:04:22.037087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.355 [2024-07-13 06:04:22.037113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:30.355 [2024-07-13 06:04:22.037129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:17:30.355 [2024-07-13 06:04:22.037140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.355 [2024-07-13 06:04:22.037315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.355 [2024-07-13 06:04:22.037340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:30.355 [2024-07-13 06:04:22.037358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:30.355 [2024-07-13 06:04:22.037371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.355 [2024-07-13 06:04:22.042025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.356 [2024-07-13 06:04:22.042057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:30.356 [2024-07-13 06:04:22.042074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.356 [2024-07-13 06:04:22.042093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.356 [2024-07-13 06:04:22.042152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.356 [2024-07-13 06:04:22.042167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:30.356 [2024-07-13 06:04:22.042182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.356 [2024-07-13 06:04:22.042193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.356 [2024-07-13 06:04:22.042321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.356 [2024-07-13 06:04:22.042359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:30.356 [2024-07-13 06:04:22.042374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.356 [2024-07-13 06:04:22.042386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.356 [2024-07-13 06:04:22.042411] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.356 [2024-07-13 06:04:22.042428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:30.356 [2024-07-13 06:04:22.042443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.356 [2024-07-13 06:04:22.042455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.356 [2024-07-13 06:04:22.050554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.356 [2024-07-13 06:04:22.050611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:30.356 [2024-07-13 06:04:22.050632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.356 [2024-07-13 06:04:22.050646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.356 [2024-07-13 06:04:22.057249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.356 [2024-07-13 06:04:22.057304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:30.356 [2024-07-13 06:04:22.057325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.356 [2024-07-13 06:04:22.057348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.356 [2024-07-13 06:04:22.057454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.356 [2024-07-13 06:04:22.057474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:30.356 [2024-07-13 06:04:22.057490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.356 [2024-07-13 06:04:22.057513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.356 [2024-07-13 06:04:22.057600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.356 [2024-07-13 06:04:22.057623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:30.356 [2024-07-13 06:04:22.057655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.356 [2024-07-13 06:04:22.057669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.356 [2024-07-13 06:04:22.057769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.356 [2024-07-13 06:04:22.057788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:30.356 [2024-07-13 06:04:22.057803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.356 [2024-07-13 06:04:22.057815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.356 [2024-07-13 06:04:22.057865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.356 [2024-07-13 06:04:22.057882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:30.356 [2024-07-13 06:04:22.057896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.356 [2024-07-13 06:04:22.057911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.356 [2024-07-13 06:04:22.057958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.356 [2024-07-13 06:04:22.057988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:30.356 [2024-07-13 06:04:22.058017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.356 [2024-07-13 06:04:22.058028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 
00:17:30.356 [2024-07-13 06:04:22.058083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:17:30.356 [2024-07-13 06:04:22.058101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 
00:17:30.356 [2024-07-13 06:04:22.058114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:17:30.356 [2024-07-13 06:04:22.058127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:30.356 [2024-07-13 06:04:22.058430] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 224.876 ms, result 0 
00:17:30.356 true 
00:17:30.615 06:04:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 90185 
00:17:30.615 06:04:22 ftl.ftl_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 90185 ']' 
00:17:30.615 06:04:22 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # kill -0 90185 
00:17:30.615 06:04:22 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # uname 
00:17:30.615 06:04:22 ftl.ftl_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:17:30.615 06:04:22 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90185 
00:17:30.615 killing process with pid 90185 Received shutdown signal, test time was about 4.000000 seconds 
00:17:30.615 
00:17:30.615 Latency(us) 
00:17:30.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:30.615 =================================================================================================================== 
00:17:30.615 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:17:30.615 06:04:22 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_0 
00:17:30.615 06:04:22 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:17:30.615 06:04:22 ftl.ftl_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90185' 
00:17:30.615 06:04:22 ftl.ftl_bdevperf -- common/autotest_common.sh@967 -- # kill 90185 
00:17:30.615 06:04:22 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # wait 90185 
00:17:30.874 06:04:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 
00:17:30.874 06:04:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 
00:17:30.874 06:04:22 ftl.ftl_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 
00:17:30.874 06:04:22 ftl.ftl_bdevperf -- common/autotest_common.sh@10 
-- # set +x 00:17:30.874 06:04:22 ftl -- common/autotest_common.sh@1142 -- # return 0 00:17:30.874 06:04:22 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:30.874 06:04:22 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:30.874 06:04:22 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:30.874 06:04:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:30.874 ************************************ 00:17:30.874 START TEST ftl_trim 00:17:30.874 ************************************ 00:17:30.874 06:04:22 ftl.ftl_trim -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:31.133 * Looking for test storage... 00:17:31.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:31.133 06:04:22 ftl.ftl_trim -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:31.133 06:04:22 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:31.134 06:04:22 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:31.134 06:04:22 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:31.134 06:04:22 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:31.134 06:04:22 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:31.134 06:04:22 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:31.134 06:04:22 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:31.134 06:04:22 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:31.134 06:04:22 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:31.134 06:04:22 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:31.134 06:04:22 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:31.134 06:04:22 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:31.134 06:04:22 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:31.134 06:04:22 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=90515 00:17:31.134 06:04:22 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:31.134 06:04:22 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 90515 00:17:31.134 06:04:22 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 90515 ']' 00:17:31.134 06:04:22 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.134 06:04:22 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.134 06:04:22 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.134 06:04:22 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.134 06:04:22 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:31.134 [2024-07-13 06:04:22.795352] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
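Before any bdevs are created, the trim fixture above brings up its own SPDK target: spdk_tgt is launched with core mask 0x7 (three reactors, matching the "Reactor started on core" lines that follow), its pid captured as svcpid, and waitforlisten blocks until the default RPC socket /var/tmp/spdk.sock accepts connections. A minimal sketch of that launch pattern, assuming autotest_common.sh is sourced (waitforlisten is defined there): 

  # start the target on cores 0-2 and wait for its RPC socket 
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 & 
  svcpid=$! 
  waitforlisten "$svcpid"   # polls until /var/tmp/spdk.sock answers RPCs 
  # subsequent rpc.py calls (bdev_nvme_attach_controller, bdev_lvol_create_lvstore, ...) use this socket 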
00:17:31.134 [2024-07-13 06:04:22.796585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90515 ] 00:17:31.392 [2024-07-13 06:04:22.954166] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:31.392 [2024-07-13 06:04:23.000694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.392 [2024-07-13 06:04:23.000797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.392 [2024-07-13 06:04:23.000886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.327 06:04:23 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.327 06:04:23 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:17:32.327 06:04:23 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:32.327 06:04:23 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:32.327 06:04:23 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:32.327 06:04:23 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:32.327 06:04:23 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:32.327 06:04:23 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:32.327 06:04:24 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:32.327 06:04:24 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:32.327 06:04:24 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:32.327 06:04:24 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:32.327 06:04:24 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:32.327 06:04:24 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:32.327 06:04:24 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:32.586 06:04:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:32.586 06:04:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:32.586 { 00:17:32.586 "name": "nvme0n1", 00:17:32.586 "aliases": [ 00:17:32.586 "a4ad19ce-631c-47ff-a331-6b82ff6ca176" 00:17:32.586 ], 00:17:32.586 "product_name": "NVMe disk", 00:17:32.586 "block_size": 4096, 00:17:32.586 "num_blocks": 1310720, 00:17:32.586 "uuid": "a4ad19ce-631c-47ff-a331-6b82ff6ca176", 00:17:32.586 "assigned_rate_limits": { 00:17:32.586 "rw_ios_per_sec": 0, 00:17:32.586 "rw_mbytes_per_sec": 0, 00:17:32.586 "r_mbytes_per_sec": 0, 00:17:32.586 "w_mbytes_per_sec": 0 00:17:32.586 }, 00:17:32.586 "claimed": true, 00:17:32.586 "claim_type": "read_many_write_one", 00:17:32.586 "zoned": false, 00:17:32.586 "supported_io_types": { 00:17:32.586 "read": true, 00:17:32.586 "write": true, 00:17:32.586 "unmap": true, 00:17:32.586 "flush": true, 00:17:32.586 "reset": true, 00:17:32.586 "nvme_admin": true, 00:17:32.586 "nvme_io": true, 00:17:32.586 "nvme_io_md": false, 00:17:32.586 "write_zeroes": true, 00:17:32.586 "zcopy": false, 00:17:32.586 "get_zone_info": false, 00:17:32.586 "zone_management": false, 00:17:32.586 "zone_append": false, 00:17:32.586 "compare": true, 00:17:32.586 "compare_and_write": false, 00:17:32.586 "abort": true, 00:17:32.586 "seek_hole": false, 00:17:32.586 "seek_data": false, 00:17:32.586 
"copy": true, 00:17:32.586 "nvme_iov_md": false 00:17:32.586 }, 00:17:32.586 "driver_specific": { 00:17:32.586 "nvme": [ 00:17:32.586 { 00:17:32.586 "pci_address": "0000:00:11.0", 00:17:32.586 "trid": { 00:17:32.586 "trtype": "PCIe", 00:17:32.586 "traddr": "0000:00:11.0" 00:17:32.586 }, 00:17:32.586 "ctrlr_data": { 00:17:32.586 "cntlid": 0, 00:17:32.586 "vendor_id": "0x1b36", 00:17:32.586 "model_number": "QEMU NVMe Ctrl", 00:17:32.586 "serial_number": "12341", 00:17:32.586 "firmware_revision": "8.0.0", 00:17:32.586 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:32.586 "oacs": { 00:17:32.586 "security": 0, 00:17:32.586 "format": 1, 00:17:32.586 "firmware": 0, 00:17:32.586 "ns_manage": 1 00:17:32.586 }, 00:17:32.586 "multi_ctrlr": false, 00:17:32.586 "ana_reporting": false 00:17:32.586 }, 00:17:32.586 "vs": { 00:17:32.586 "nvme_version": "1.4" 00:17:32.586 }, 00:17:32.586 "ns_data": { 00:17:32.586 "id": 1, 00:17:32.586 "can_share": false 00:17:32.586 } 00:17:32.586 } 00:17:32.586 ], 00:17:32.586 "mp_policy": "active_passive" 00:17:32.586 } 00:17:32.586 } 00:17:32.586 ]' 00:17:32.586 06:04:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:32.852 06:04:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:32.852 06:04:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:32.852 06:04:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:32.852 06:04:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:32.852 06:04:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:17:32.852 06:04:24 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:32.852 06:04:24 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:32.852 06:04:24 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:32.852 06:04:24 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:32.853 06:04:24 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:33.112 06:04:24 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=09c19a18-76f8-4960-b0f2-9abc8a62f31d 00:17:33.112 06:04:24 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:33.112 06:04:24 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 09c19a18-76f8-4960-b0f2-9abc8a62f31d 00:17:33.370 06:04:24 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:33.629 06:04:25 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=8095fdbf-f51a-4b5d-a213-75c37daa9449 00:17:33.629 06:04:25 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8095fdbf-f51a-4b5d-a213-75c37daa9449 00:17:33.888 06:04:25 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b 00:17:33.888 06:04:25 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b 00:17:33.888 06:04:25 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:33.888 06:04:25 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:33.888 06:04:25 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b 00:17:33.888 06:04:25 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:33.888 06:04:25 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b 00:17:33.888 06:04:25 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b 00:17:33.888 06:04:25 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:33.888 06:04:25 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:33.888 06:04:25 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:33.888 06:04:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b 00:17:34.146 06:04:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:34.146 { 00:17:34.146 "name": "a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b", 00:17:34.146 "aliases": [ 00:17:34.146 "lvs/nvme0n1p0" 00:17:34.146 ], 00:17:34.146 "product_name": "Logical Volume", 00:17:34.146 "block_size": 4096, 00:17:34.146 "num_blocks": 26476544, 00:17:34.146 "uuid": "a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b", 00:17:34.146 "assigned_rate_limits": { 00:17:34.146 "rw_ios_per_sec": 0, 00:17:34.146 "rw_mbytes_per_sec": 0, 00:17:34.146 "r_mbytes_per_sec": 0, 00:17:34.146 "w_mbytes_per_sec": 0 00:17:34.146 }, 00:17:34.146 "claimed": false, 00:17:34.146 "zoned": false, 00:17:34.146 "supported_io_types": { 00:17:34.146 "read": true, 00:17:34.146 "write": true, 00:17:34.146 "unmap": true, 00:17:34.146 "flush": false, 00:17:34.146 "reset": true, 00:17:34.146 "nvme_admin": false, 00:17:34.146 "nvme_io": false, 00:17:34.146 "nvme_io_md": false, 00:17:34.146 "write_zeroes": true, 00:17:34.146 "zcopy": false, 00:17:34.146 "get_zone_info": false, 00:17:34.146 "zone_management": false, 00:17:34.146 "zone_append": false, 00:17:34.146 "compare": false, 00:17:34.146 "compare_and_write": false, 00:17:34.146 "abort": false, 00:17:34.146 "seek_hole": true, 00:17:34.146 "seek_data": true, 00:17:34.146 "copy": false, 00:17:34.146 "nvme_iov_md": false 00:17:34.146 }, 00:17:34.146 "driver_specific": { 00:17:34.146 "lvol": { 00:17:34.146 "lvol_store_uuid": "8095fdbf-f51a-4b5d-a213-75c37daa9449", 00:17:34.146 "base_bdev": "nvme0n1", 00:17:34.146 "thin_provision": true, 00:17:34.146 "num_allocated_clusters": 0, 00:17:34.146 "snapshot": false, 00:17:34.146 "clone": false, 00:17:34.146 "esnap_clone": false 00:17:34.146 } 00:17:34.146 } 00:17:34.146 } 00:17:34.146 ]' 00:17:34.146 06:04:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:34.146 06:04:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:34.146 06:04:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:34.146 06:04:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:34.146 06:04:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:34.146 06:04:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:34.146 06:04:25 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:34.146 06:04:25 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:34.146 06:04:25 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:34.404 06:04:26 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:34.404 06:04:26 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:34.404 06:04:26 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b 00:17:34.404 06:04:26 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b 00:17:34.404 
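(For reference: the get_bdev_size helper traced above and below reduces to the following minimal bash sketch. The rpc.py path is shortened and the bdev name is this run's lvol UUID; both are assumptions to adjust for another setup.)
# Minimal sketch of the size computation this trace performs:
bs=$(scripts/rpc.py bdev_get_bdevs -b a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b | jq '.[] .block_size')
nb=$(scripts/rpc.py bdev_get_bdevs -b a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b | jq '.[] .num_blocks')
echo $(( bs * nb / 1024 / 1024 ))  # 4096 * 26476544 blocks = 103424 MiB, matching bdev_size above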
06:04:26 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:34.404 06:04:26 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:34.404 06:04:26 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:34.404 06:04:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b 00:17:34.661 06:04:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:34.661 { 00:17:34.661 "name": "a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b", 00:17:34.661 "aliases": [ 00:17:34.661 "lvs/nvme0n1p0" 00:17:34.661 ], 00:17:34.661 "product_name": "Logical Volume", 00:17:34.661 "block_size": 4096, 00:17:34.661 "num_blocks": 26476544, 00:17:34.661 "uuid": "a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b", 00:17:34.661 "assigned_rate_limits": { 00:17:34.661 "rw_ios_per_sec": 0, 00:17:34.661 "rw_mbytes_per_sec": 0, 00:17:34.661 "r_mbytes_per_sec": 0, 00:17:34.661 "w_mbytes_per_sec": 0 00:17:34.661 }, 00:17:34.661 "claimed": false, 00:17:34.661 "zoned": false, 00:17:34.661 "supported_io_types": { 00:17:34.661 "read": true, 00:17:34.661 "write": true, 00:17:34.661 "unmap": true, 00:17:34.661 "flush": false, 00:17:34.661 "reset": true, 00:17:34.661 "nvme_admin": false, 00:17:34.661 "nvme_io": false, 00:17:34.661 "nvme_io_md": false, 00:17:34.661 "write_zeroes": true, 00:17:34.661 "zcopy": false, 00:17:34.661 "get_zone_info": false, 00:17:34.661 "zone_management": false, 00:17:34.661 "zone_append": false, 00:17:34.661 "compare": false, 00:17:34.661 "compare_and_write": false, 00:17:34.661 "abort": false, 00:17:34.661 "seek_hole": true, 00:17:34.661 "seek_data": true, 00:17:34.661 "copy": false, 00:17:34.661 "nvme_iov_md": false 00:17:34.661 }, 00:17:34.661 "driver_specific": { 00:17:34.661 "lvol": { 00:17:34.661 "lvol_store_uuid": "8095fdbf-f51a-4b5d-a213-75c37daa9449", 00:17:34.661 "base_bdev": "nvme0n1", 00:17:34.661 "thin_provision": true, 00:17:34.661 "num_allocated_clusters": 0, 00:17:34.661 "snapshot": false, 00:17:34.661 "clone": false, 00:17:34.662 "esnap_clone": false 00:17:34.662 } 00:17:34.662 } 00:17:34.662 } 00:17:34.662 ]' 00:17:34.662 06:04:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:34.920 06:04:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:34.920 06:04:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:34.920 06:04:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:34.920 06:04:26 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:34.920 06:04:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:34.920 06:04:26 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:34.920 06:04:26 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:35.178 06:04:26 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:35.178 06:04:26 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:35.178 06:04:26 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b 00:17:35.178 06:04:26 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b 00:17:35.178 06:04:26 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:35.178 06:04:26 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:35.178 06:04:26 ftl.ftl_trim -- 
common/autotest_common.sh@1381 -- # local nb 00:17:35.178 06:04:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b 00:17:35.436 06:04:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:35.436 { 00:17:35.436 "name": "a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b", 00:17:35.436 "aliases": [ 00:17:35.436 "lvs/nvme0n1p0" 00:17:35.436 ], 00:17:35.436 "product_name": "Logical Volume", 00:17:35.436 "block_size": 4096, 00:17:35.436 "num_blocks": 26476544, 00:17:35.436 "uuid": "a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b", 00:17:35.436 "assigned_rate_limits": { 00:17:35.436 "rw_ios_per_sec": 0, 00:17:35.436 "rw_mbytes_per_sec": 0, 00:17:35.436 "r_mbytes_per_sec": 0, 00:17:35.436 "w_mbytes_per_sec": 0 00:17:35.436 }, 00:17:35.436 "claimed": false, 00:17:35.436 "zoned": false, 00:17:35.436 "supported_io_types": { 00:17:35.436 "read": true, 00:17:35.436 "write": true, 00:17:35.436 "unmap": true, 00:17:35.436 "flush": false, 00:17:35.436 "reset": true, 00:17:35.436 "nvme_admin": false, 00:17:35.436 "nvme_io": false, 00:17:35.436 "nvme_io_md": false, 00:17:35.436 "write_zeroes": true, 00:17:35.436 "zcopy": false, 00:17:35.436 "get_zone_info": false, 00:17:35.436 "zone_management": false, 00:17:35.436 "zone_append": false, 00:17:35.436 "compare": false, 00:17:35.436 "compare_and_write": false, 00:17:35.436 "abort": false, 00:17:35.436 "seek_hole": true, 00:17:35.436 "seek_data": true, 00:17:35.436 "copy": false, 00:17:35.436 "nvme_iov_md": false 00:17:35.436 }, 00:17:35.436 "driver_specific": { 00:17:35.436 "lvol": { 00:17:35.436 "lvol_store_uuid": "8095fdbf-f51a-4b5d-a213-75c37daa9449", 00:17:35.436 "base_bdev": "nvme0n1", 00:17:35.436 "thin_provision": true, 00:17:35.436 "num_allocated_clusters": 0, 00:17:35.436 "snapshot": false, 00:17:35.436 "clone": false, 00:17:35.436 "esnap_clone": false 00:17:35.436 } 00:17:35.436 } 00:17:35.436 } 00:17:35.436 ]' 00:17:35.436 06:04:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:35.436 06:04:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:35.436 06:04:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:35.436 06:04:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:35.436 06:04:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:35.436 06:04:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:35.436 06:04:27 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:35.436 06:04:27 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:35.695 [2024-07-13 06:04:27.365876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.696 [2024-07-13 06:04:27.365938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:35.696 [2024-07-13 06:04:27.365964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:35.696 [2024-07-13 06:04:27.365980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.696 [2024-07-13 06:04:27.368980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.696 [2024-07-13 06:04:27.369026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:35.696 [2024-07-13 06:04:27.369065] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.953 ms 00:17:35.696 [2024-07-13 06:04:27.369077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.696 [2024-07-13 06:04:27.369305] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:35.696 [2024-07-13 06:04:27.369649] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:35.696 [2024-07-13 06:04:27.369697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.696 [2024-07-13 06:04:27.369713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:35.696 [2024-07-13 06:04:27.369733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:17:35.696 [2024-07-13 06:04:27.369745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.696 [2024-07-13 06:04:27.369989] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1a49c5a3-2b59-4791-9c81-0428a1736fe5 00:17:35.696 [2024-07-13 06:04:27.371004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.696 [2024-07-13 06:04:27.371048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:35.696 [2024-07-13 06:04:27.371084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:35.696 [2024-07-13 06:04:27.371098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.696 [2024-07-13 06:04:27.375843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.696 [2024-07-13 06:04:27.375892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:35.696 [2024-07-13 06:04:27.375926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.622 ms 00:17:35.696 [2024-07-13 06:04:27.375941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.696 [2024-07-13 06:04:27.376107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.696 [2024-07-13 06:04:27.376172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:35.696 [2024-07-13 06:04:27.376205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:35.696 [2024-07-13 06:04:27.376224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.696 [2024-07-13 06:04:27.376273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.696 [2024-07-13 06:04:27.376291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:35.696 [2024-07-13 06:04:27.376305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:35.696 [2024-07-13 06:04:27.376320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.696 [2024-07-13 06:04:27.376367] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:35.696 [2024-07-13 06:04:27.377905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.696 [2024-07-13 06:04:27.377942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:35.696 [2024-07-13 06:04:27.377981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.544 ms 00:17:35.696 [2024-07-13 06:04:27.377993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.696 [2024-07-13 
06:04:27.378051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.696 [2024-07-13 06:04:27.378067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:35.696 [2024-07-13 06:04:27.378081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:35.696 [2024-07-13 06:04:27.378093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.696 [2024-07-13 06:04:27.378139] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:35.696 [2024-07-13 06:04:27.378347] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:35.696 [2024-07-13 06:04:27.378378] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:35.696 [2024-07-13 06:04:27.378395] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:35.696 [2024-07-13 06:04:27.378412] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:35.696 [2024-07-13 06:04:27.378430] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:35.696 [2024-07-13 06:04:27.378445] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:35.696 [2024-07-13 06:04:27.378456] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:35.696 [2024-07-13 06:04:27.378470] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:35.696 [2024-07-13 06:04:27.378481] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:35.696 [2024-07-13 06:04:27.378499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.696 [2024-07-13 06:04:27.378511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:35.696 [2024-07-13 06:04:27.378540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:17:35.696 [2024-07-13 06:04:27.378568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.696 [2024-07-13 06:04:27.378677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.696 [2024-07-13 06:04:27.378693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:35.696 [2024-07-13 06:04:27.378710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:35.696 [2024-07-13 06:04:27.378722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.696 [2024-07-13 06:04:27.378855] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:35.696 [2024-07-13 06:04:27.378874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:35.696 [2024-07-13 06:04:27.378908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:35.696 [2024-07-13 06:04:27.378937] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.696 [2024-07-13 06:04:27.378968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:35.696 [2024-07-13 06:04:27.378981] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:35.696 [2024-07-13 06:04:27.378994] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:35.696 [2024-07-13 06:04:27.379005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:17:35.696 [2024-07-13 06:04:27.379017] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:35.696 [2024-07-13 06:04:27.379028] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:35.696 [2024-07-13 06:04:27.379041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:35.696 [2024-07-13 06:04:27.379081] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:35.696 [2024-07-13 06:04:27.379096] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:35.696 [2024-07-13 06:04:27.379108] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:35.696 [2024-07-13 06:04:27.379124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:35.696 [2024-07-13 06:04:27.379148] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.696 [2024-07-13 06:04:27.379163] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:35.696 [2024-07-13 06:04:27.379174] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:35.696 [2024-07-13 06:04:27.379187] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.696 [2024-07-13 06:04:27.379199] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:35.696 [2024-07-13 06:04:27.379211] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:35.696 [2024-07-13 06:04:27.379222] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.696 [2024-07-13 06:04:27.379235] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:35.696 [2024-07-13 06:04:27.379245] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:35.696 [2024-07-13 06:04:27.379258] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.696 [2024-07-13 06:04:27.379269] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:35.696 [2024-07-13 06:04:27.379282] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:35.696 [2024-07-13 06:04:27.379292] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.696 [2024-07-13 06:04:27.379305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:35.696 [2024-07-13 06:04:27.379316] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:35.696 [2024-07-13 06:04:27.379332] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:35.696 [2024-07-13 06:04:27.379344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:35.696 [2024-07-13 06:04:27.379356] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:35.696 [2024-07-13 06:04:27.379386] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:35.696 [2024-07-13 06:04:27.379401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:35.696 [2024-07-13 06:04:27.379415] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:35.696 [2024-07-13 06:04:27.379428] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:35.696 [2024-07-13 06:04:27.379438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:35.696 [2024-07-13 06:04:27.379451] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:35.696 [2024-07-13 06:04:27.379461] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.696 [2024-07-13 06:04:27.379474] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:35.696 [2024-07-13 06:04:27.379485] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:35.696 [2024-07-13 06:04:27.379497] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.696 [2024-07-13 06:04:27.379508] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:35.696 [2024-07-13 06:04:27.379522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:35.696 [2024-07-13 06:04:27.379534] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:35.696 [2024-07-13 06:04:27.379550] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:35.696 [2024-07-13 06:04:27.379562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:35.696 [2024-07-13 06:04:27.379575] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:35.696 [2024-07-13 06:04:27.379585] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:35.696 [2024-07-13 06:04:27.379598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:35.696 [2024-07-13 06:04:27.379609] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:35.696 [2024-07-13 06:04:27.379622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:35.697 [2024-07-13 06:04:27.379639] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:35.697 [2024-07-13 06:04:27.379656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:35.697 [2024-07-13 06:04:27.379669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:35.697 [2024-07-13 06:04:27.379683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:35.697 [2024-07-13 06:04:27.379695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:35.697 [2024-07-13 06:04:27.379709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:35.697 [2024-07-13 06:04:27.379722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:35.697 [2024-07-13 06:04:27.379735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:35.697 [2024-07-13 06:04:27.379748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:35.697 [2024-07-13 06:04:27.379763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:35.697 [2024-07-13 06:04:27.379776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:35.697 [2024-07-13 06:04:27.379790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:35.697 [2024-07-13 06:04:27.379802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:35.697 [2024-07-13 06:04:27.379816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:35.697 [2024-07-13 06:04:27.379828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:35.697 [2024-07-13 06:04:27.379842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:35.697 [2024-07-13 06:04:27.379854] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:35.697 [2024-07-13 06:04:27.379870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:35.697 [2024-07-13 06:04:27.379886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:35.697 [2024-07-13 06:04:27.379900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:35.697 [2024-07-13 06:04:27.379912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:35.697 [2024-07-13 06:04:27.379926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:35.697 [2024-07-13 06:04:27.379939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.697 [2024-07-13 06:04:27.379973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:35.697 [2024-07-13 06:04:27.379987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:17:35.697 [2024-07-13 06:04:27.380003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.697 [2024-07-13 06:04:27.380128] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:17:35.697 [2024-07-13 06:04:27.380175] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:38.222 [2024-07-13 06:04:29.379861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.222 [2024-07-13 06:04:29.379939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:38.222 [2024-07-13 06:04:29.379980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1999.745 ms 00:17:38.222 [2024-07-13 06:04:29.380017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.222 [2024-07-13 06:04:29.387622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.222 [2024-07-13 06:04:29.387681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:38.222 [2024-07-13 06:04:29.387717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.465 ms 00:17:38.222 [2024-07-13 06:04:29.387732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.222 [2024-07-13 06:04:29.387901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.222 [2024-07-13 06:04:29.387927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:38.222 [2024-07-13 06:04:29.387941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:38.222 [2024-07-13 06:04:29.387958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.222 [2024-07-13 06:04:29.419357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.222 [2024-07-13 06:04:29.419446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:38.222 [2024-07-13 06:04:29.419490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.357 ms 00:17:38.222 [2024-07-13 06:04:29.419516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.222 [2024-07-13 06:04:29.419706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.222 [2024-07-13 06:04:29.419746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:38.222 [2024-07-13 06:04:29.419775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:38.222 [2024-07-13 06:04:29.419799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.222 [2024-07-13 06:04:29.420281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.222 [2024-07-13 06:04:29.420343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:38.222 [2024-07-13 06:04:29.420371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:17:38.222 [2024-07-13 06:04:29.420395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.222 [2024-07-13 06:04:29.420668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.222 [2024-07-13 06:04:29.420718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:38.222 [2024-07-13 06:04:29.420743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:17:38.222 [2024-07-13 06:04:29.420770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.222 [2024-07-13 06:04:29.427976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.222 [2024-07-13 06:04:29.428028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:38.222 [2024-07-13 
06:04:29.428062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.144 ms 00:17:38.222 [2024-07-13 06:04:29.428076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.222 [2024-07-13 06:04:29.437575] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:38.222 [2024-07-13 06:04:29.451988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.222 [2024-07-13 06:04:29.452051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:38.222 [2024-07-13 06:04:29.452092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.729 ms 00:17:38.222 [2024-07-13 06:04:29.452104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.222 [2024-07-13 06:04:29.500624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.222 [2024-07-13 06:04:29.500693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:38.222 [2024-07-13 06:04:29.500735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.355 ms 00:17:38.222 [2024-07-13 06:04:29.500748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.222 [2024-07-13 06:04:29.501007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.222 [2024-07-13 06:04:29.501028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:38.222 [2024-07-13 06:04:29.501044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:17:38.222 [2024-07-13 06:04:29.501056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.222 [2024-07-13 06:04:29.504580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.222 [2024-07-13 06:04:29.504620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:38.222 [2024-07-13 06:04:29.504657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.479 ms 00:17:38.223 [2024-07-13 06:04:29.504685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.223 [2024-07-13 06:04:29.507934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.223 [2024-07-13 06:04:29.507972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:38.223 [2024-07-13 06:04:29.508008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.187 ms 00:17:38.223 [2024-07-13 06:04:29.508020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.223 [2024-07-13 06:04:29.508411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.223 [2024-07-13 06:04:29.508437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:38.223 [2024-07-13 06:04:29.508453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:17:38.223 [2024-07-13 06:04:29.508465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.223 [2024-07-13 06:04:29.540517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.223 [2024-07-13 06:04:29.540581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:38.223 [2024-07-13 06:04:29.540621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.985 ms 00:17:38.223 [2024-07-13 06:04:29.540635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.223 [2024-07-13 06:04:29.544792] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.223 [2024-07-13 06:04:29.544833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:38.223 [2024-07-13 06:04:29.544869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.092 ms 00:17:38.223 [2024-07-13 06:04:29.544882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.223 [2024-07-13 06:04:29.548493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.223 [2024-07-13 06:04:29.548532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:38.223 [2024-07-13 06:04:29.548568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.531 ms 00:17:38.223 [2024-07-13 06:04:29.548580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.223 [2024-07-13 06:04:29.552375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.223 [2024-07-13 06:04:29.552414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:38.223 [2024-07-13 06:04:29.552451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.734 ms 00:17:38.223 [2024-07-13 06:04:29.552463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.223 [2024-07-13 06:04:29.552528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.223 [2024-07-13 06:04:29.552545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:38.223 [2024-07-13 06:04:29.552560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:38.223 [2024-07-13 06:04:29.552572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.223 [2024-07-13 06:04:29.552659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.223 [2024-07-13 06:04:29.552676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:38.223 [2024-07-13 06:04:29.552691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:38.223 [2024-07-13 06:04:29.552702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.223 [2024-07-13 06:04:29.553829] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:38.223 [2024-07-13 06:04:29.555033] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2187.550 ms, result 0 00:17:38.223 { 00:17:38.223 "name": "ftl0", 00:17:38.223 "uuid": "1a49c5a3-2b59-4791-9c81-0428a1736fe5" 00:17:38.223 } 00:17:38.223 [2024-07-13 06:04:29.555812] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:38.223 06:04:29 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:38.223 06:04:29 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:17:38.223 06:04:29 ftl.ftl_trim -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:38.223 06:04:29 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local i 00:17:38.223 06:04:29 ftl.ftl_trim -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:38.223 06:04:29 ftl.ftl_trim -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:38.223 06:04:29 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:38.223 06:04:29 ftl.ftl_trim -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:38.482 [ 00:17:38.482 { 00:17:38.482 "name": "ftl0", 00:17:38.482 "aliases": [ 00:17:38.482 "1a49c5a3-2b59-4791-9c81-0428a1736fe5" 00:17:38.482 ], 00:17:38.482 "product_name": "FTL disk", 00:17:38.482 "block_size": 4096, 00:17:38.482 "num_blocks": 23592960, 00:17:38.482 "uuid": "1a49c5a3-2b59-4791-9c81-0428a1736fe5", 00:17:38.482 "assigned_rate_limits": { 00:17:38.482 "rw_ios_per_sec": 0, 00:17:38.482 "rw_mbytes_per_sec": 0, 00:17:38.482 "r_mbytes_per_sec": 0, 00:17:38.482 "w_mbytes_per_sec": 0 00:17:38.482 }, 00:17:38.482 "claimed": false, 00:17:38.482 "zoned": false, 00:17:38.482 "supported_io_types": { 00:17:38.482 "read": true, 00:17:38.482 "write": true, 00:17:38.482 "unmap": true, 00:17:38.482 "flush": true, 00:17:38.482 "reset": false, 00:17:38.482 "nvme_admin": false, 00:17:38.482 "nvme_io": false, 00:17:38.482 "nvme_io_md": false, 00:17:38.482 "write_zeroes": true, 00:17:38.482 "zcopy": false, 00:17:38.482 "get_zone_info": false, 00:17:38.482 "zone_management": false, 00:17:38.482 "zone_append": false, 00:17:38.482 "compare": false, 00:17:38.482 "compare_and_write": false, 00:17:38.482 "abort": false, 00:17:38.482 "seek_hole": false, 00:17:38.482 "seek_data": false, 00:17:38.482 "copy": false, 00:17:38.482 "nvme_iov_md": false 00:17:38.482 }, 00:17:38.482 "driver_specific": { 00:17:38.482 "ftl": { 00:17:38.482 "base_bdev": "a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b", 00:17:38.482 "cache": "nvc0n1p0" 00:17:38.482 } 00:17:38.482 } 00:17:38.482 } 00:17:38.482 ] 00:17:38.482 06:04:30 ftl.ftl_trim -- common/autotest_common.sh@905 -- # return 0 00:17:38.482 06:04:30 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:38.482 06:04:30 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:38.740 06:04:30 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:38.740 06:04:30 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:38.998 06:04:30 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:38.998 { 00:17:38.998 "name": "ftl0", 00:17:38.998 "aliases": [ 00:17:38.998 "1a49c5a3-2b59-4791-9c81-0428a1736fe5" 00:17:38.998 ], 00:17:38.998 "product_name": "FTL disk", 00:17:38.998 "block_size": 4096, 00:17:38.998 "num_blocks": 23592960, 00:17:38.998 "uuid": "1a49c5a3-2b59-4791-9c81-0428a1736fe5", 00:17:38.998 "assigned_rate_limits": { 00:17:38.998 "rw_ios_per_sec": 0, 00:17:38.998 "rw_mbytes_per_sec": 0, 00:17:38.998 "r_mbytes_per_sec": 0, 00:17:38.998 "w_mbytes_per_sec": 0 00:17:38.998 }, 00:17:38.998 "claimed": false, 00:17:38.998 "zoned": false, 00:17:38.998 "supported_io_types": { 00:17:38.998 "read": true, 00:17:38.998 "write": true, 00:17:38.998 "unmap": true, 00:17:38.998 "flush": true, 00:17:38.998 "reset": false, 00:17:38.998 "nvme_admin": false, 00:17:38.998 "nvme_io": false, 00:17:38.998 "nvme_io_md": false, 00:17:38.998 "write_zeroes": true, 00:17:38.998 "zcopy": false, 00:17:38.998 "get_zone_info": false, 00:17:38.998 "zone_management": false, 00:17:38.998 "zone_append": false, 00:17:38.998 "compare": false, 00:17:38.998 "compare_and_write": false, 00:17:38.998 "abort": false, 00:17:38.998 "seek_hole": false, 00:17:38.998 "seek_data": false, 00:17:38.998 "copy": false, 00:17:38.998 "nvme_iov_md": false 00:17:38.998 }, 00:17:38.998 "driver_specific": { 00:17:38.998 "ftl": { 00:17:38.998 "base_bdev": "a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b", 00:17:38.998 "cache": "nvc0n1p0" 
00:17:38.998 } 00:17:38.998 } 00:17:38.998 } 00:17:38.998 ]' 00:17:38.998 06:04:30 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:39.257 06:04:30 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:39.257 06:04:30 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:39.257 [2024-07-13 06:04:30.967502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.257 [2024-07-13 06:04:30.967574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:39.257 [2024-07-13 06:04:30.967596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:39.257 [2024-07-13 06:04:30.967611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.257 [2024-07-13 06:04:30.967660] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:39.257 [2024-07-13 06:04:30.968113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.257 [2024-07-13 06:04:30.968153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:39.257 [2024-07-13 06:04:30.968173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:17:39.257 [2024-07-13 06:04:30.968185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.257 [2024-07-13 06:04:30.968745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.257 [2024-07-13 06:04:30.968772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:39.257 [2024-07-13 06:04:30.968792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:17:39.257 [2024-07-13 06:04:30.968804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.257 [2024-07-13 06:04:30.972541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.257 [2024-07-13 06:04:30.972573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:39.257 [2024-07-13 06:04:30.972592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.700 ms 00:17:39.257 [2024-07-13 06:04:30.972604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.257 [2024-07-13 06:04:30.980300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.257 [2024-07-13 06:04:30.980373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:39.257 [2024-07-13 06:04:30.980481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.614 ms 00:17:39.257 [2024-07-13 06:04:30.980496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.257 [2024-07-13 06:04:30.981872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.257 [2024-07-13 06:04:30.981916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:39.257 [2024-07-13 06:04:30.981937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:17:39.257 [2024-07-13 06:04:30.981949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.517 [2024-07-13 06:04:30.985579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.517 [2024-07-13 06:04:30.985625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:39.517 [2024-07-13 06:04:30.985647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.540 ms 00:17:39.517 
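(For reference: the ftl0 lifecycle this test exercises, from creation through the unload whose trace follows, reduces to the bash sketch below. Bdev names, the lvol UUID, and the tuning flags are the values from this run, not general defaults.)
# Hedged sketch of the FTL bdev lifecycle exercised here:
scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a97b5dd0-e58c-4e18-bcbf-2ffc517bc32b \
    -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000   # wait (up to 2000 ms) for ftl0 to appear
scripts/rpc.py bdev_ftl_unload -b ftl0          # persist L2P and metadata, then tear down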
[2024-07-13 06:04:30.985660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.517 [2024-07-13 06:04:30.985919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.517 [2024-07-13 06:04:30.985946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:39.517 [2024-07-13 06:04:30.985966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:17:39.517 [2024-07-13 06:04:30.985994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.517 [2024-07-13 06:04:30.987588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.517 [2024-07-13 06:04:30.987627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:39.517 [2024-07-13 06:04:30.987647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.548 ms 00:17:39.517 [2024-07-13 06:04:30.987659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.517 [2024-07-13 06:04:30.988995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.517 [2024-07-13 06:04:30.989048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:39.517 [2024-07-13 06:04:30.989083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.277 ms 00:17:39.517 [2024-07-13 06:04:30.989095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.517 [2024-07-13 06:04:30.990123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.517 [2024-07-13 06:04:30.990174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:39.517 [2024-07-13 06:04:30.990193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:17:39.517 [2024-07-13 06:04:30.990212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.517 [2024-07-13 06:04:30.991374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.517 [2024-07-13 06:04:30.991410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:39.517 [2024-07-13 06:04:30.991431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.047 ms 00:17:39.517 [2024-07-13 06:04:30.991443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.517 [2024-07-13 06:04:30.991501] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:39.517 [2024-07-13 06:04:30.991524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:39.517 [2024-07-13 06:04:30.991954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:39.518 [2024-07-13 06:04:30.991970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:39.518 [2024-07-13 06:04:30.991983] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 33-100: 0 / 261120 wr_cnt: 0 state: free (all 68 bands identical)
00:17:39.518 [2024-07-13 06:04:30.993958] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:39.518 [2024-07-13 06:04:30.993974] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1a49c5a3-2b59-4791-9c81-0428a1736fe5
00:17:39.518 [2024-07-13 06:04:30.993987] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:39.518 [2024-07-13 06:04:30.994000] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:39.518 [2024-07-13 06:04:30.994018] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:39.518 [2024-07-13 06:04:30.994033] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:39.518 [2024-07-13 06:04:30.994063] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:39.518 [2024-07-13 06:04:30.994079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:39.518 [2024-07-13 06:04:30.994091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:39.518 [2024-07-13 06:04:30.994103] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:39.518 [2024-07-13 06:04:30.994113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
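The "WAF: inf" above is consistent with the two counters printed just before it: write amplification is total device writes divided by user writes, and at this point the test has issued no user writes (960 / 0). A minimal shell sketch of that arithmetic, using the values from the dump above (illustration only, not SPDK source):

    # WAF = total device writes / user writes; 960 / 0 is reported as "inf"
    total_writes=960
    user_writes=0
    awk -v t="$total_writes" -v u="$user_writes" \
        'BEGIN { print (u > 0 ? t / u : "inf") }'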
00:17:39.518 [2024-07-13 06:04:30.994141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:39.518 [2024-07-13 06:04:30.994157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:39.518 [2024-07-13 06:04:30.994173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.630 ms
00:17:39.518 [2024-07-13 06:04:30.994185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.518 [2024-07-13 06:04:30.995671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:39.518 [2024-07-13 06:04:30.995697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:39.518 [2024-07-13 06:04:30.995714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.416 ms
00:17:39.518 [2024-07-13 06:04:30.995742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.518 [2024-07-13 06:04:30.995852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:39.518 [2024-07-13 06:04:30.995874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:39.518 [2024-07-13 06:04:30.995890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms
00:17:39.518 [2024-07-13 06:04:30.995916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.518 [2024-07-13 06:04:31.001684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:39.518 [2024-07-13 06:04:31.001864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:39.518 [2024-07-13 06:04:31.001991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:39.518 [2024-07-13 06:04:31.002065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.518 [2024-07-13 06:04:31.002346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:39.518 [2024-07-13 06:04:31.002488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:39.519 [2024-07-13 06:04:31.002614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:39.519 [2024-07-13 06:04:31.002760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.519 [2024-07-13 06:04:31.002905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:39.519 [2024-07-13 06:04:31.002991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:39.519 [2024-07-13 06:04:31.003121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:39.519 [2024-07-13 06:04:31.003262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.519 [2024-07-13 06:04:31.003355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:39.519 [2024-07-13 06:04:31.003476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:39.519 [2024-07-13 06:04:31.003538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:39.519 [2024-07-13 06:04:31.003637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:39.519 [2024-07-13 06:04:31.012426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0]
Rollback 00:17:39.519 [2024-07-13 06:04:31.012689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:39.519 [2024-07-13 06:04:31.012831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.519 [2024-07-13 06:04:31.012884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.519 [2024-07-13 06:04:31.019653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.519 [2024-07-13 06:04:31.019869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:39.519 [2024-07-13 06:04:31.019993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.519 [2024-07-13 06:04:31.020046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.519 [2024-07-13 06:04:31.020192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.519 [2024-07-13 06:04:31.020269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:39.519 [2024-07-13 06:04:31.020400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.519 [2024-07-13 06:04:31.020529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.519 [2024-07-13 06:04:31.020647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.519 [2024-07-13 06:04:31.020793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:39.519 [2024-07-13 06:04:31.020861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.519 [2024-07-13 06:04:31.020979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.519 [2024-07-13 06:04:31.021184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.519 [2024-07-13 06:04:31.021255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:39.519 [2024-07-13 06:04:31.021372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.519 [2024-07-13 06:04:31.021436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.519 [2024-07-13 06:04:31.021704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.519 [2024-07-13 06:04:31.021835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:39.519 [2024-07-13 06:04:31.021956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.519 [2024-07-13 06:04:31.021980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.519 [2024-07-13 06:04:31.022055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.519 [2024-07-13 06:04:31.022073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:39.519 [2024-07-13 06:04:31.022088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.519 [2024-07-13 06:04:31.022102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.519 [2024-07-13 06:04:31.022200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.519 [2024-07-13 06:04:31.022221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:39.519 [2024-07-13 06:04:31.022236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.519 [2024-07-13 06:04:31.022249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.519 [2024-07-13 
06:04:31.022470] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.936 ms, result 0
00:17:39.519 true
00:17:39.519 06:04:31 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 90515
00:17:39.519 06:04:31 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 90515 ']'
00:17:39.519 06:04:31 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 90515
00:17:39.519 06:04:31 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname
00:17:39.519 06:04:31 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:39.519 06:04:31 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90515
00:17:39.519 06:04:31 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:17:39.519 06:04:31 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:17:39.519 06:04:31 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90515'
00:17:39.519 killing process with pid 90515
00:17:39.519 06:04:31 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 90515
00:17:39.519 06:04:31 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 90515
00:17:42.803 06:04:33 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:17:43.741 65536+0 records in
00:17:43.741 65536+0 records out
00:17:43.741 268435456 bytes (268 MB, 256 MiB) copied, 1.21711 s, 221 MB/s
00:17:43.741 06:04:35 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:43.741 [2024-07-13 06:04:35.286831] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
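trim.sh@66 and trim.sh@69 above are the data-load phase of the test: 256 MiB of random data is generated with dd and then replayed into the FTL bdev through spdk_dd. A condensed sketch of that pair of commands; the trace elides dd's output redirection, so the file name below is inferred from the --if argument spdk_dd receives:

    #!/usr/bin/env bash
    # data-load phase of the ftl_trim test, reconstructed from the trace above
    set -euo pipefail
    SPDK_DIR=/home/vagrant/spdk_repo/spdk   # path taken from the spdk_dd invocation

    # trim.sh@66: 65536 x 4 KiB random blocks = 256 MiB (output path is an assumption)
    dd if=/dev/urandom of="$SPDK_DIR/test/ftl/random_pattern" bs=4K count=65536

    # trim.sh@69: write that pattern into the FTL bdev ftl0 described by ftl.json
    "$SPDK_DIR/build/bin/spdk_dd" \
        --if="$SPDK_DIR/test/ftl/random_pattern" \
        --ob=ftl0 \
        --json="$SPDK_DIR/test/ftl/config/ftl.json"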
00:17:43.741 [2024-07-13 06:04:35.287018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90688 ] 00:17:43.741 [2024-07-13 06:04:35.440756] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.000 [2024-07-13 06:04:35.485246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.000 [2024-07-13 06:04:35.576871] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.000 [2024-07-13 06:04:35.576991] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.261 [2024-07-13 06:04:35.736635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.261 [2024-07-13 06:04:35.736725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:44.261 [2024-07-13 06:04:35.736747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:44.261 [2024-07-13 06:04:35.736759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.261 [2024-07-13 06:04:35.739606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.261 [2024-07-13 06:04:35.739662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.261 [2024-07-13 06:04:35.739696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.805 ms 00:17:44.261 [2024-07-13 06:04:35.739707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.261 [2024-07-13 06:04:35.739908] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:44.261 [2024-07-13 06:04:35.740212] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:44.261 [2024-07-13 06:04:35.740250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.261 [2024-07-13 06:04:35.740264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.261 [2024-07-13 06:04:35.740287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:17:44.261 [2024-07-13 06:04:35.740298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.261 [2024-07-13 06:04:35.741675] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:44.261 [2024-07-13 06:04:35.744039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.261 [2024-07-13 06:04:35.744081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:44.261 [2024-07-13 06:04:35.744114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.366 ms 00:17:44.261 [2024-07-13 06:04:35.744126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.261 [2024-07-13 06:04:35.744246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.261 [2024-07-13 06:04:35.744269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:44.261 [2024-07-13 06:04:35.744283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:44.261 [2024-07-13 06:04:35.744308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.261 [2024-07-13 06:04:35.748995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
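The two bdev_open_ext notices above are transient: when spdk_dd loads ftl.json, the cache bdev nvc0n1 has not been registered yet, so the open is retried until bdev examination completes; the startup steps that follow then pair nvc0n1p0 as the write-buffer (NV) cache with the base bdev. For reference, a hedged sketch of creating the same kind of FTL bdev at runtime over JSON-RPC; "base0" is a placeholder (the base bdev is not named in this part of the log), and the exact rpc.py flag spelling may differ between SPDK versions:

    # hypothetical runtime equivalent of the FTL device described by ftl.json;
    # not the mechanism this test actually uses to configure ftl0
    ./scripts/rpc.py bdev_ftl_create -b ftl0 -d base0 -c nvc0n1p0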
00:17:44.261 [2024-07-13 06:04:35.749041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.261 [2024-07-13 06:04:35.749084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.623 ms 00:17:44.261 [2024-07-13 06:04:35.749095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.261 [2024-07-13 06:04:35.749302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.261 [2024-07-13 06:04:35.749327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.261 [2024-07-13 06:04:35.749345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:44.261 [2024-07-13 06:04:35.749370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.261 [2024-07-13 06:04:35.749423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.261 [2024-07-13 06:04:35.749439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:44.261 [2024-07-13 06:04:35.749452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:44.261 [2024-07-13 06:04:35.749463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.261 [2024-07-13 06:04:35.749495] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:44.261 [2024-07-13 06:04:35.750873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.261 [2024-07-13 06:04:35.750927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.261 [2024-07-13 06:04:35.750958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.388 ms 00:17:44.261 [2024-07-13 06:04:35.750969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.261 [2024-07-13 06:04:35.751016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.261 [2024-07-13 06:04:35.751048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:44.261 [2024-07-13 06:04:35.751060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:44.261 [2024-07-13 06:04:35.751070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.261 [2024-07-13 06:04:35.751097] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:44.261 [2024-07-13 06:04:35.751140] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:44.261 [2024-07-13 06:04:35.751287] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:44.261 [2024-07-13 06:04:35.751345] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:44.261 [2024-07-13 06:04:35.751499] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:44.261 [2024-07-13 06:04:35.751529] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:44.261 [2024-07-13 06:04:35.751546] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:44.261 [2024-07-13 06:04:35.751562] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:44.261 [2024-07-13 06:04:35.751577] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:44.261 [2024-07-13 06:04:35.751590] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:44.261 [2024-07-13 06:04:35.751602] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:44.261 [2024-07-13 06:04:35.751631] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:44.261 [2024-07-13 06:04:35.751642] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:44.261 [2024-07-13 06:04:35.751675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.262 [2024-07-13 06:04:35.751688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:44.262 [2024-07-13 06:04:35.751700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:17:44.262 [2024-07-13 06:04:35.751715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.262 [2024-07-13 06:04:35.751838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.262 [2024-07-13 06:04:35.751857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:44.262 [2024-07-13 06:04:35.751870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:44.262 [2024-07-13 06:04:35.751881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.262 [2024-07-13 06:04:35.752007] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:44.262 [2024-07-13 06:04:35.752038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:44.262 [2024-07-13 06:04:35.752051] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.262 [2024-07-13 06:04:35.752062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.262 [2024-07-13 06:04:35.752074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:44.262 [2024-07-13 06:04:35.752084] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:44.262 [2024-07-13 06:04:35.752095] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:44.262 [2024-07-13 06:04:35.752105] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:44.262 [2024-07-13 06:04:35.752116] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:44.262 [2024-07-13 06:04:35.752126] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.262 [2024-07-13 06:04:35.752164] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:44.262 [2024-07-13 06:04:35.752180] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:44.262 [2024-07-13 06:04:35.752191] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.262 [2024-07-13 06:04:35.752201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:44.262 [2024-07-13 06:04:35.752212] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:44.262 [2024-07-13 06:04:35.752222] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.262 [2024-07-13 06:04:35.752232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:44.262 [2024-07-13 06:04:35.752243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:44.262 [2024-07-13 06:04:35.752255] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.262 [2024-07-13 06:04:35.752267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:44.262 [2024-07-13 06:04:35.752278] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:44.262 [2024-07-13 06:04:35.752289] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.262 [2024-07-13 06:04:35.752299] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:44.262 [2024-07-13 06:04:35.752309] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:44.262 [2024-07-13 06:04:35.752319] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.262 [2024-07-13 06:04:35.752330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:44.262 [2024-07-13 06:04:35.752345] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:44.262 [2024-07-13 06:04:35.752356] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.262 [2024-07-13 06:04:35.752367] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:44.262 [2024-07-13 06:04:35.752377] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:44.262 [2024-07-13 06:04:35.752387] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.262 [2024-07-13 06:04:35.752397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:44.262 [2024-07-13 06:04:35.752408] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:44.262 [2024-07-13 06:04:35.752418] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.262 [2024-07-13 06:04:35.752428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:44.262 [2024-07-13 06:04:35.752438] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:44.262 [2024-07-13 06:04:35.752448] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.262 [2024-07-13 06:04:35.752458] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:44.262 [2024-07-13 06:04:35.752469] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:44.262 [2024-07-13 06:04:35.752479] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.262 [2024-07-13 06:04:35.752489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:44.262 [2024-07-13 06:04:35.752500] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:44.262 [2024-07-13 06:04:35.752512] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.262 [2024-07-13 06:04:35.752523] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:44.262 [2024-07-13 06:04:35.752534] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:44.262 [2024-07-13 06:04:35.752545] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.262 [2024-07-13 06:04:35.752556] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.262 [2024-07-13 06:04:35.752567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:44.262 [2024-07-13 06:04:35.752578] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:44.262 [2024-07-13 06:04:35.752588] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:44.262 
[2024-07-13 06:04:35.752599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:44.262 [2024-07-13 06:04:35.752610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:44.262 [2024-07-13 06:04:35.752620] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:44.262 [2024-07-13 06:04:35.752632] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:44.262 [2024-07-13 06:04:35.752647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.262 [2024-07-13 06:04:35.752671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:44.262 [2024-07-13 06:04:35.752691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:44.262 [2024-07-13 06:04:35.752703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:44.262 [2024-07-13 06:04:35.752740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:44.262 [2024-07-13 06:04:35.752757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:44.262 [2024-07-13 06:04:35.752792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:44.262 [2024-07-13 06:04:35.752806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:44.262 [2024-07-13 06:04:35.752821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:44.262 [2024-07-13 06:04:35.752831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:44.262 [2024-07-13 06:04:35.752842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:44.262 [2024-07-13 06:04:35.752853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:44.262 [2024-07-13 06:04:35.752863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:44.262 [2024-07-13 06:04:35.752873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:44.263 [2024-07-13 06:04:35.752884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:44.263 [2024-07-13 06:04:35.752906] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:44.263 [2024-07-13 06:04:35.752919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.263 [2024-07-13 06:04:35.752930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:44.263 [2024-07-13 06:04:35.752941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:44.263 [2024-07-13 06:04:35.752952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:44.263 [2024-07-13 06:04:35.752965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:44.263 [2024-07-13 06:04:35.752978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.752988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:44.263 [2024-07-13 06:04:35.752999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:17:44.263 [2024-07-13 06:04:35.753009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.770036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.770322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.263 [2024-07-13 06:04:35.770477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.945 ms 00:17:44.263 [2024-07-13 06:04:35.770540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.770839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.770996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:44.263 [2024-07-13 06:04:35.771125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:44.263 [2024-07-13 06:04:35.771287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.779293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.779528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.263 [2024-07-13 06:04:35.779647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.919 ms 00:17:44.263 [2024-07-13 06:04:35.779707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.779828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.779893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.263 [2024-07-13 06:04:35.779935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.263 [2024-07-13 06:04:35.780055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.780570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.780749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.263 [2024-07-13 06:04:35.780885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:17:44.263 [2024-07-13 06:04:35.781037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.781327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.781406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.263 [2024-07-13 06:04:35.781539] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:17:44.263 [2024-07-13 06:04:35.781594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.786661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.786847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.263 [2024-07-13 06:04:35.786984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.999 ms 00:17:44.263 [2024-07-13 06:04:35.787113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.789492] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:44.263 [2024-07-13 06:04:35.789714] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:44.263 [2024-07-13 06:04:35.789859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.789905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:44.263 [2024-07-13 06:04:35.790042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.555 ms 00:17:44.263 [2024-07-13 06:04:35.790061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.807243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.807295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:44.263 [2024-07-13 06:04:35.807314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.111 ms 00:17:44.263 [2024-07-13 06:04:35.807326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.809569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.809625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:44.263 [2024-07-13 06:04:35.809657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.109 ms 00:17:44.263 [2024-07-13 06:04:35.809668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.811401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.811442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:44.263 [2024-07-13 06:04:35.811458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.681 ms 00:17:44.263 [2024-07-13 06:04:35.811470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.811925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.811953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:44.263 [2024-07-13 06:04:35.811968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:17:44.263 [2024-07-13 06:04:35.811979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.828669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.828740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:44.263 [2024-07-13 06:04:35.828761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.644 ms 00:17:44.263 [2024-07-13 06:04:35.828772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.837793] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:44.263 [2024-07-13 06:04:35.852250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.852319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:44.263 [2024-07-13 06:04:35.852339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.370 ms 00:17:44.263 [2024-07-13 06:04:35.852349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.852481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.852517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:44.263 [2024-07-13 06:04:35.852547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:44.263 [2024-07-13 06:04:35.852557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.852638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.852653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:44.263 [2024-07-13 06:04:35.852665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:44.263 [2024-07-13 06:04:35.852676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.852708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.852735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:44.263 [2024-07-13 06:04:35.852747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:44.263 [2024-07-13 06:04:35.852761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.852798] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:44.263 [2024-07-13 06:04:35.852816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.852835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:44.263 [2024-07-13 06:04:35.852854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:44.263 [2024-07-13 06:04:35.852864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.856466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.263 [2024-07-13 06:04:35.856506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:44.263 [2024-07-13 06:04:35.856523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.564 ms 00:17:44.263 [2024-07-13 06:04:35.856533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.263 [2024-07-13 06:04:35.856656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.264 [2024-07-13 06:04:35.856686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:44.264 [2024-07-13 06:04:35.856709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:44.264 [2024-07-13 06:04:35.856719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.264 
[2024-07-13 06:04:35.857719] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:44.264 [2024-07-13 06:04:35.858930] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 120.762 ms, result 0
00:17:44.264 [2024-07-13 06:04:35.859682] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:44.264 [2024-07-13 06:04:35.869304] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:55.163  Copying: 256/256 [MB] (average 23 MBps)
[2024-07-13 06:04:46.636487] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:55.163 [2024-07-13 06:04:46.637637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:55.163 [2024-07-13 06:04:46.637673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:55.163 [2024-07-13 06:04:46.637696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:17:55.163 [2024-07-13 06:04:46.637709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:55.163 [2024-07-13 06:04:46.637738] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:17:55.164 [2024-07-13 06:04:46.638166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:55.164 [2024-07-13 06:04:46.638190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:55.164 [2024-07-13 06:04:46.638219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms
00:17:55.164 [2024-07-13 06:04:46.638230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:55.164 [2024-07-13 06:04:46.639918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:55.164 [2024-07-13 06:04:46.639958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:55.164 [2024-07-13 06:04:46.639997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.646 ms
00:17:55.164 [2024-07-13 06:04:46.640009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:55.164 [2024-07-13 06:04:46.646979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:55.164 [2024-07-13 06:04:46.647017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:17:55.164 [2024-07-13 06:04:46.647050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.946 ms
00:17:55.164 [2024-07-13 06:04:46.647062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:55.164 [2024-07-13 06:04:46.654772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:55.164 [2024-07-13 06:04:46.654808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:17:55.164 [2024-07-13 06:04:46.654823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.645 ms
00:17:55.164 [2024-07-13 06:04:46.654842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:55.164 [2024-07-13 06:04:46.656020]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.164 [2024-07-13 06:04:46.656075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:55.164 [2024-07-13 06:04:46.656107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:17:55.164 [2024-07-13 06:04:46.656118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.164 [2024-07-13 06:04:46.659089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.164 [2024-07-13 06:04:46.659175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:55.164 [2024-07-13 06:04:46.659194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.916 ms 00:17:55.164 [2024-07-13 06:04:46.659205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.164 [2024-07-13 06:04:46.659345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.164 [2024-07-13 06:04:46.659365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:55.164 [2024-07-13 06:04:46.659392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:17:55.164 [2024-07-13 06:04:46.659404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.164 [2024-07-13 06:04:46.661150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.164 [2024-07-13 06:04:46.661240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:55.164 [2024-07-13 06:04:46.661255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.722 ms 00:17:55.164 [2024-07-13 06:04:46.661266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.164 [2024-07-13 06:04:46.662527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.164 [2024-07-13 06:04:46.662580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:55.164 [2024-07-13 06:04:46.662594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:17:55.164 [2024-07-13 06:04:46.662604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.164 [2024-07-13 06:04:46.663828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.164 [2024-07-13 06:04:46.663868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:55.164 [2024-07-13 06:04:46.663883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.184 ms 00:17:55.164 [2024-07-13 06:04:46.663893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.164 [2024-07-13 06:04:46.665197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.164 [2024-07-13 06:04:46.665228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:55.164 [2024-07-13 06:04:46.665242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.219 ms 00:17:55.164 [2024-07-13 06:04:46.665253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.164 [2024-07-13 06:04:46.665293] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:55.164 [2024-07-13 06:04:46.665317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:55.164 [2024-07-13 06:04:46.665331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:55.164 
[2024-07-13 06:04:46.665343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-100: 0 / 261120 wr_cnt: 0 state: free (identical for every band)
00:17:55.165 [2024-07-13 06:04:46.666583] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:55.165 [2024-07-13 06:04:46.666595] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:
1a49c5a3-2b59-4791-9c81-0428a1736fe5 00:17:55.165 [2024-07-13 06:04:46.666607] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:55.165 [2024-07-13 06:04:46.666618] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:55.165 [2024-07-13 06:04:46.666628] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:55.165 [2024-07-13 06:04:46.666640] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:55.165 [2024-07-13 06:04:46.666650] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:55.165 [2024-07-13 06:04:46.666666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:55.165 [2024-07-13 06:04:46.666677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:55.165 [2024-07-13 06:04:46.666688] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:55.165 [2024-07-13 06:04:46.666698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:55.165 [2024-07-13 06:04:46.666709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.165 [2024-07-13 06:04:46.666721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:55.165 [2024-07-13 06:04:46.666737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:17:55.165 [2024-07-13 06:04:46.666747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.165 [2024-07-13 06:04:46.668519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.165 [2024-07-13 06:04:46.668682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:55.165 [2024-07-13 06:04:46.668801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.746 ms 00:17:55.165 [2024-07-13 06:04:46.668946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.165 [2024-07-13 06:04:46.669098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.165 [2024-07-13 06:04:46.669215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:55.165 [2024-07-13 06:04:46.669415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:55.165 [2024-07-13 06:04:46.669466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.165 [2024-07-13 06:04:46.674215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.165 [2024-07-13 06:04:46.674373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:55.165 [2024-07-13 06:04:46.674494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.165 [2024-07-13 06:04:46.674617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.165 [2024-07-13 06:04:46.674829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.165 [2024-07-13 06:04:46.674963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:55.165 [2024-07-13 06:04:46.675077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.165 [2024-07-13 06:04:46.675214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.165 [2024-07-13 06:04:46.675411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.165 [2024-07-13 06:04:46.675554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:55.165 
[2024-07-13 06:04:46.675669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.165 [2024-07-13 06:04:46.675779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.165 [2024-07-13 06:04:46.675865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.165 [2024-07-13 06:04:46.675941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:55.165 [2024-07-13 06:04:46.676045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.165 [2024-07-13 06:04:46.676067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.165 [2024-07-13 06:04:46.684027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.165 [2024-07-13 06:04:46.684264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:55.165 [2024-07-13 06:04:46.684391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.165 [2024-07-13 06:04:46.684453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.165 [2024-07-13 06:04:46.691054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.165 [2024-07-13 06:04:46.691287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:55.165 [2024-07-13 06:04:46.691437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.165 [2024-07-13 06:04:46.691492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.165 [2024-07-13 06:04:46.691575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.165 [2024-07-13 06:04:46.691702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:55.165 [2024-07-13 06:04:46.691812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.165 [2024-07-13 06:04:46.691865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.165 [2024-07-13 06:04:46.691935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.165 [2024-07-13 06:04:46.692007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:55.165 [2024-07-13 06:04:46.692053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.165 [2024-07-13 06:04:46.692105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.165 [2024-07-13 06:04:46.692258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.165 [2024-07-13 06:04:46.692293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:55.165 [2024-07-13 06:04:46.692309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.165 [2024-07-13 06:04:46.692332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.165 [2024-07-13 06:04:46.692391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.165 [2024-07-13 06:04:46.692414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:55.165 [2024-07-13 06:04:46.692427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.165 [2024-07-13 06:04:46.692438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.165 [2024-07-13 06:04:46.692486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.165 [2024-07-13 06:04:46.692502] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:55.165 [2024-07-13 06:04:46.692513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.165 [2024-07-13 06:04:46.692525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.165 [2024-07-13 06:04:46.692579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.165 [2024-07-13 06:04:46.692601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:55.165 [2024-07-13 06:04:46.692625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.165 [2024-07-13 06:04:46.692650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.165 [2024-07-13 06:04:46.692812] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.146 ms, result 0 00:17:55.424 00:17:55.424 00:17:55.424 06:04:47 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=90812 00:17:55.424 06:04:47 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:55.424 06:04:47 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 90812 00:17:55.424 06:04:47 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 90812 ']' 00:17:55.424 06:04:47 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.424 06:04:47 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.424 06:04:47 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.424 06:04:47 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.424 06:04:47 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:55.424 [2024-07-13 06:04:47.134914] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
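The xtrace above shows trim.sh restarting the SPDK target and waiting on its RPC socket before reloading the saved configuration. A minimal standalone sketch of that sequence, using only the paths and commands recorded in the trace (waitforlisten is the autotest_common.sh helper the trace invokes and is assumed here to be sourced):

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$SPDK_BIN" -L ftl_init &    # start spdk_tgt with FTL init logging, as trim.sh does
    svcpid=$!                    # the trace records this as svcpid=90812

    waitforlisten "$svcpid"      # poll until /var/tmp/spdk.sock accepts RPCs
    "$RPC" load_config           # replay the saved bdev/FTL configuration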
00:17:55.424 [2024-07-13 06:04:47.135082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90812 ] 00:17:55.682 [2024-07-13 06:04:47.278406] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.682 [2024-07-13 06:04:47.314466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.613 06:04:48 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.613 06:04:48 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:17:56.613 06:04:48 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:56.613 [2024-07-13 06:04:48.287484] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:56.613 [2024-07-13 06:04:48.287566] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:56.872 [2024-07-13 06:04:48.464323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.872 [2024-07-13 06:04:48.464377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:56.872 [2024-07-13 06:04:48.464418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:56.872 [2024-07-13 06:04:48.464430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.872 [2024-07-13 06:04:48.467458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.872 [2024-07-13 06:04:48.467498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:56.872 [2024-07-13 06:04:48.467568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.998 ms 00:17:56.872 [2024-07-13 06:04:48.467581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.872 [2024-07-13 06:04:48.467701] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:56.872 [2024-07-13 06:04:48.467992] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:56.872 [2024-07-13 06:04:48.468024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.872 [2024-07-13 06:04:48.468037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:56.872 [2024-07-13 06:04:48.468056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:17:56.873 [2024-07-13 06:04:48.468078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.873 [2024-07-13 06:04:48.469501] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:56.873 [2024-07-13 06:04:48.471680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.873 [2024-07-13 06:04:48.471745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:56.873 [2024-07-13 06:04:48.471763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.191 ms 00:17:56.873 [2024-07-13 06:04:48.471793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.873 [2024-07-13 06:04:48.471884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.873 [2024-07-13 06:04:48.471909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:56.873 [2024-07-13 06:04:48.471923] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:56.873 [2024-07-13 06:04:48.471942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.873 [2024-07-13 06:04:48.476441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.873 [2024-07-13 06:04:48.476500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:56.873 [2024-07-13 06:04:48.476516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.409 ms 00:17:56.873 [2024-07-13 06:04:48.476532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.873 [2024-07-13 06:04:48.476709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.873 [2024-07-13 06:04:48.476736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:56.873 [2024-07-13 06:04:48.476751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:17:56.873 [2024-07-13 06:04:48.476775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.873 [2024-07-13 06:04:48.476835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.873 [2024-07-13 06:04:48.476857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:56.873 [2024-07-13 06:04:48.476871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:56.873 [2024-07-13 06:04:48.476887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.873 [2024-07-13 06:04:48.476936] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:56.873 [2024-07-13 06:04:48.478424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.873 [2024-07-13 06:04:48.478484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:56.873 [2024-07-13 06:04:48.478534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.486 ms 00:17:56.873 [2024-07-13 06:04:48.478556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.873 [2024-07-13 06:04:48.478638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.873 [2024-07-13 06:04:48.478653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:56.873 [2024-07-13 06:04:48.478680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:56.873 [2024-07-13 06:04:48.478691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.873 [2024-07-13 06:04:48.478721] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:56.873 [2024-07-13 06:04:48.478748] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:56.873 [2024-07-13 06:04:48.478798] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:56.873 [2024-07-13 06:04:48.478827] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:56.873 [2024-07-13 06:04:48.478942] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:56.873 [2024-07-13 06:04:48.478957] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:56.873 [2024-07-13 06:04:48.478972] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:56.873 [2024-07-13 06:04:48.478993] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:56.873 [2024-07-13 06:04:48.479009] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:56.873 [2024-07-13 06:04:48.479021] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:56.873 [2024-07-13 06:04:48.479035] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:56.873 [2024-07-13 06:04:48.479046] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:56.873 [2024-07-13 06:04:48.479068] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:56.873 [2024-07-13 06:04:48.479079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.873 [2024-07-13 06:04:48.479091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:56.873 [2024-07-13 06:04:48.479103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:17:56.873 [2024-07-13 06:04:48.479125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.873 [2024-07-13 06:04:48.479267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.873 [2024-07-13 06:04:48.479297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:56.873 [2024-07-13 06:04:48.479310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:17:56.873 [2024-07-13 06:04:48.479322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.873 [2024-07-13 06:04:48.479432] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:56.873 [2024-07-13 06:04:48.479453] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:56.873 [2024-07-13 06:04:48.479482] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:56.873 [2024-07-13 06:04:48.479496] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.873 [2024-07-13 06:04:48.479508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:56.873 [2024-07-13 06:04:48.479523] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:56.873 [2024-07-13 06:04:48.479534] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:56.873 [2024-07-13 06:04:48.479546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:56.873 [2024-07-13 06:04:48.479558] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:56.873 [2024-07-13 06:04:48.479570] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:56.873 [2024-07-13 06:04:48.479581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:56.873 [2024-07-13 06:04:48.479593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:56.873 [2024-07-13 06:04:48.479604] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:56.873 [2024-07-13 06:04:48.479617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:56.873 [2024-07-13 06:04:48.479628] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:56.873 [2024-07-13 06:04:48.479640] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.873 
[2024-07-13 06:04:48.479651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:56.873 [2024-07-13 06:04:48.479664] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:56.873 [2024-07-13 06:04:48.479675] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.873 [2024-07-13 06:04:48.479690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:56.873 [2024-07-13 06:04:48.479702] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:56.873 [2024-07-13 06:04:48.479717] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.873 [2024-07-13 06:04:48.479728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:56.873 [2024-07-13 06:04:48.479741] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:56.873 [2024-07-13 06:04:48.479751] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.873 [2024-07-13 06:04:48.479764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:56.873 [2024-07-13 06:04:48.479775] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:56.873 [2024-07-13 06:04:48.479787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.873 [2024-07-13 06:04:48.479797] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:56.873 [2024-07-13 06:04:48.479810] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:56.873 [2024-07-13 06:04:48.479820] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.873 [2024-07-13 06:04:48.479832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:56.873 [2024-07-13 06:04:48.479843] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:56.873 [2024-07-13 06:04:48.479855] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:56.873 [2024-07-13 06:04:48.479866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:56.873 [2024-07-13 06:04:48.479878] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:56.873 [2024-07-13 06:04:48.479888] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:56.873 [2024-07-13 06:04:48.479911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:56.873 [2024-07-13 06:04:48.479924] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:56.873 [2024-07-13 06:04:48.479940] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.873 [2024-07-13 06:04:48.479951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:56.873 [2024-07-13 06:04:48.479967] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:56.873 [2024-07-13 06:04:48.479979] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.873 [2024-07-13 06:04:48.479993] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:56.873 [2024-07-13 06:04:48.480019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:56.873 [2024-07-13 06:04:48.480036] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:56.873 [2024-07-13 06:04:48.480059] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.873 [2024-07-13 06:04:48.480077] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:56.873 [2024-07-13 06:04:48.480090] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:56.874 [2024-07-13 06:04:48.480106] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:56.874 [2024-07-13 06:04:48.480118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:56.874 [2024-07-13 06:04:48.480160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:56.874 [2024-07-13 06:04:48.480176] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:56.874 [2024-07-13 06:04:48.480199] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:56.874 [2024-07-13 06:04:48.480215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:56.874 [2024-07-13 06:04:48.480233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:56.874 [2024-07-13 06:04:48.480246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:56.874 [2024-07-13 06:04:48.480262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:56.874 [2024-07-13 06:04:48.480275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:56.874 [2024-07-13 06:04:48.480291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:56.874 [2024-07-13 06:04:48.480304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:56.874 [2024-07-13 06:04:48.480320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:56.874 [2024-07-13 06:04:48.480333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:56.874 [2024-07-13 06:04:48.480349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:56.874 [2024-07-13 06:04:48.480362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:56.874 [2024-07-13 06:04:48.480378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:56.874 [2024-07-13 06:04:48.480390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:56.874 [2024-07-13 06:04:48.480407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:56.874 [2024-07-13 06:04:48.480420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:56.874 [2024-07-13 06:04:48.480441] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:56.874 [2024-07-13 
06:04:48.480460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:56.874 [2024-07-13 06:04:48.480478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:56.874 [2024-07-13 06:04:48.480491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:56.874 [2024-07-13 06:04:48.480507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:56.874 [2024-07-13 06:04:48.480520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:56.874 [2024-07-13 06:04:48.480538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.480551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:56.874 [2024-07-13 06:04:48.480568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.163 ms 00:17:56.874 [2024-07-13 06:04:48.480593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.874 [2024-07-13 06:04:48.488657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.488708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:56.874 [2024-07-13 06:04:48.488765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.929 ms 00:17:56.874 [2024-07-13 06:04:48.488777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.874 [2024-07-13 06:04:48.488951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.488980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:56.874 [2024-07-13 06:04:48.489003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:56.874 [2024-07-13 06:04:48.489015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.874 [2024-07-13 06:04:48.497119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.497204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:56.874 [2024-07-13 06:04:48.497249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.067 ms 00:17:56.874 [2024-07-13 06:04:48.497263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.874 [2024-07-13 06:04:48.497373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.497403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:56.874 [2024-07-13 06:04:48.497423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:56.874 [2024-07-13 06:04:48.497436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.874 [2024-07-13 06:04:48.497864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.497899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:56.874 [2024-07-13 06:04:48.497923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:17:56.874 [2024-07-13 06:04:48.497937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:56.874 [2024-07-13 06:04:48.498114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.498157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:56.874 [2024-07-13 06:04:48.498185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:17:56.874 [2024-07-13 06:04:48.498199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.874 [2024-07-13 06:04:48.503937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.503973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:56.874 [2024-07-13 06:04:48.504012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.670 ms 00:17:56.874 [2024-07-13 06:04:48.504024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.874 [2024-07-13 06:04:48.506513] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:56.874 [2024-07-13 06:04:48.506570] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:56.874 [2024-07-13 06:04:48.506625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.506639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:56.874 [2024-07-13 06:04:48.506657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.423 ms 00:17:56.874 [2024-07-13 06:04:48.506669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.874 [2024-07-13 06:04:48.523230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.523281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:56.874 [2024-07-13 06:04:48.523322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.476 ms 00:17:56.874 [2024-07-13 06:04:48.523336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.874 [2024-07-13 06:04:48.525155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.525218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:56.874 [2024-07-13 06:04:48.525242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.687 ms 00:17:56.874 [2024-07-13 06:04:48.525255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.874 [2024-07-13 06:04:48.526873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.526943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:56.874 [2024-07-13 06:04:48.526965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.556 ms 00:17:56.874 [2024-07-13 06:04:48.526977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.874 [2024-07-13 06:04:48.527428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.527457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:56.874 [2024-07-13 06:04:48.527495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:17:56.874 [2024-07-13 06:04:48.527509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.874 [2024-07-13 06:04:48.552996] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.553078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:56.874 [2024-07-13 06:04:48.553122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.417 ms 00:17:56.874 [2024-07-13 06:04:48.553196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.874 [2024-07-13 06:04:48.561223] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:56.874 [2024-07-13 06:04:48.574258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.574362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:56.874 [2024-07-13 06:04:48.574384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.903 ms 00:17:56.874 [2024-07-13 06:04:48.574401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.874 [2024-07-13 06:04:48.574530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.574572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:56.874 [2024-07-13 06:04:48.574592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:56.874 [2024-07-13 06:04:48.574609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.874 [2024-07-13 06:04:48.574690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.574731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:56.874 [2024-07-13 06:04:48.574745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:56.874 [2024-07-13 06:04:48.574761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.874 [2024-07-13 06:04:48.574795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.574833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:56.874 [2024-07-13 06:04:48.574847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:56.874 [2024-07-13 06:04:48.574881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.874 [2024-07-13 06:04:48.574939] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:56.874 [2024-07-13 06:04:48.574962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.574974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:56.874 [2024-07-13 06:04:48.574990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:56.874 [2024-07-13 06:04:48.575002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.874 [2024-07-13 06:04:48.578754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.874 [2024-07-13 06:04:48.578795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:56.874 [2024-07-13 06:04:48.578836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.713 ms 00:17:56.874 [2024-07-13 06:04:48.578849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.875 [2024-07-13 06:04:48.578981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.875 [2024-07-13 06:04:48.579001] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:56.875 [2024-07-13 06:04:48.579019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:56.875 [2024-07-13 06:04:48.579042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.875 [2024-07-13 06:04:48.580184] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:56.875 [2024-07-13 06:04:48.581475] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 115.409 ms, result 0 00:17:56.875 [2024-07-13 06:04:48.582503] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:57.132 Some configs were skipped because the RPC state that can call them passed over. 00:17:57.132 06:04:48 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:57.132 [2024-07-13 06:04:48.857482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.132 [2024-07-13 06:04:48.857724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:57.132 [2024-07-13 06:04:48.857855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.573 ms 00:17:57.132 [2024-07-13 06:04:48.857921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.132 [2024-07-13 06:04:48.858025] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.124 ms, result 0 00:17:57.388 true 00:17:57.388 06:04:48 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:57.646 [2024-07-13 06:04:49.117376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.646 [2024-07-13 06:04:49.117572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:57.646 [2024-07-13 06:04:49.117718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.143 ms 00:17:57.646 [2024-07-13 06:04:49.117775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.646 [2024-07-13 06:04:49.117881] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.654 ms, result 0 00:17:57.646 true 00:17:57.646 06:04:49 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 90812 00:17:57.646 06:04:49 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 90812 ']' 00:17:57.646 06:04:49 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 90812 00:17:57.646 06:04:49 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:17:57.646 06:04:49 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:57.646 06:04:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90812 00:17:57.646 killing process with pid 90812 00:17:57.646 06:04:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:57.646 06:04:49 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:57.646 06:04:49 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90812' 00:17:57.646 06:04:49 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 90812 00:17:57.646 06:04:49 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 90812 00:17:57.646 [2024-07-13 06:04:49.262759] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.646 [2024-07-13 06:04:49.262843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:57.646 [2024-07-13 06:04:49.262872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:57.646 [2024-07-13 06:04:49.262886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.646 [2024-07-13 06:04:49.262919] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:57.646 [2024-07-13 06:04:49.263700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.646 [2024-07-13 06:04:49.263933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:57.646 [2024-07-13 06:04:49.264094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.750 ms 00:17:57.646 [2024-07-13 06:04:49.264120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.646 [2024-07-13 06:04:49.264489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.646 [2024-07-13 06:04:49.264551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:57.646 [2024-07-13 06:04:49.264569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:17:57.646 [2024-07-13 06:04:49.264582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.646 [2024-07-13 06:04:49.268763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.646 [2024-07-13 06:04:49.268807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:57.646 [2024-07-13 06:04:49.268827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.148 ms 00:17:57.646 [2024-07-13 06:04:49.268840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.646 [2024-07-13 06:04:49.276188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.646 [2024-07-13 06:04:49.276220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:57.646 [2024-07-13 06:04:49.276255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.280 ms 00:17:57.647 [2024-07-13 06:04:49.276266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.647 [2024-07-13 06:04:49.277505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.647 [2024-07-13 06:04:49.277586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:57.647 [2024-07-13 06:04:49.277620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.148 ms 00:17:57.647 [2024-07-13 06:04:49.277631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.647 [2024-07-13 06:04:49.280624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.647 [2024-07-13 06:04:49.280662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:57.647 [2024-07-13 06:04:49.280696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.929 ms 00:17:57.647 [2024-07-13 06:04:49.280718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.647 [2024-07-13 06:04:49.280859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.647 [2024-07-13 06:04:49.280878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:57.647 [2024-07-13 06:04:49.280914] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:17:57.647 [2024-07-13 06:04:49.280927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.647 [2024-07-13 06:04:49.282879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.647 [2024-07-13 06:04:49.282930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:57.647 [2024-07-13 06:04:49.282967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.918 ms 00:17:57.647 [2024-07-13 06:04:49.282979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.647 [2024-07-13 06:04:49.284420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.647 [2024-07-13 06:04:49.284456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:57.647 [2024-07-13 06:04:49.284477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.384 ms 00:17:57.647 [2024-07-13 06:04:49.284489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.647 [2024-07-13 06:04:49.285688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.647 [2024-07-13 06:04:49.285725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:57.647 [2024-07-13 06:04:49.285763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.114 ms 00:17:57.647 [2024-07-13 06:04:49.285776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.647 [2024-07-13 06:04:49.287017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.647 [2024-07-13 06:04:49.287054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:57.647 [2024-07-13 06:04:49.287092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:17:57.647 [2024-07-13 06:04:49.287105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.647 [2024-07-13 06:04:49.287198] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:57.647 [2024-07-13 06:04:49.287224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:57.647 [2024-07-13 06:04:49.287246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:57.647 [2024-07-13 06:04:49.287260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:57.647 [2024-07-13 06:04:49.287281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:57.647 [2024-07-13 06:04:49.287295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:57.647 [2024-07-13 06:04:49.287312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:57.647 [2024-07-13 06:04:49.287324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:57.647 [2024-07-13 06:04:49.287341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:57.647 [2024-07-13 06:04:49.287354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:57.647 [2024-07-13 06:04:49.287371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:57.647 [2024-07-13 06:04:49.287384] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
[Bands 12-100 print identical records: 0 / 261120 wr_cnt: 0 state: free; condensed]
00:17:57.648 [2024-07-13 06:04:49.288749] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:57.648 [2024-07-13 06:04:49.288767] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1a49c5a3-2b59-4791-9c81-0428a1736fe5
00:17:57.648 [2024-07-13 06:04:49.288781] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:57.648 [2024-07-13 06:04:49.288797] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:57.648 [2024-07-13 06:04:49.288815] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:57.648 [2024-07-13 06:04:49.288832] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:57.648 [2024-07-13 06:04:49.288844] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:57.648 [2024-07-13 06:04:49.288860] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:57.648 [2024-07-13 06:04:49.288871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:57.648 [2024-07-13 06:04:49.288886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:57.648 [2024-07-13 06:04:49.288898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:57.648 [2024-07-13 06:04:49.288914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.648
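The statistics block above shows total writes: 960 against user writes: 0, which is why WAF prints as inf: write amplification is the ratio of media writes to host writes, and since no host data had been written at this point, the 960 media writes are all the FTL's own metadata traffic, so the ratio has no finite value. A quick sketch of that arithmetic in Python (plain arithmetic on the dumped counters, not an SPDK API):

    import math

    # counters taken from the ftl_dev_dump_stats records above
    total_writes = 960   # all media writes, here purely FTL metadata
    user_writes = 0      # host writes; none were issued in this phase

    # write amplification factor: media writes per host write
    waf = math.inf if user_writes == 0 else total_writes / user_writes
    print(waf)  # inf, matching "WAF: inf" in the dump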
[2024-07-13 06:04:49.288927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:57.648 [2024-07-13 06:04:49.288944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.721 ms 00:17:57.648 [2024-07-13 06:04:49.288956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.648 [2024-07-13 06:04:49.290410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.648 [2024-07-13 06:04:49.290437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:57.648 [2024-07-13 06:04:49.290470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.404 ms 00:17:57.648 [2024-07-13 06:04:49.290482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.648 [2024-07-13 06:04:49.290598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.648 [2024-07-13 06:04:49.290617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:57.648 [2024-07-13 06:04:49.290635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:57.648 [2024-07-13 06:04:49.290648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.648 [2024-07-13 06:04:49.296052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.648 [2024-07-13 06:04:49.296097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:57.648 [2024-07-13 06:04:49.296146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.648 [2024-07-13 06:04:49.296160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.648 [2024-07-13 06:04:49.296245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.648 [2024-07-13 06:04:49.296262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:57.648 [2024-07-13 06:04:49.296279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.648 [2024-07-13 06:04:49.296290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.648 [2024-07-13 06:04:49.296362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.648 [2024-07-13 06:04:49.296379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:57.648 [2024-07-13 06:04:49.296397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.648 [2024-07-13 06:04:49.296408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.648 [2024-07-13 06:04:49.296440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.648 [2024-07-13 06:04:49.296454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:57.648 [2024-07-13 06:04:49.296470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.648 [2024-07-13 06:04:49.296481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.648 [2024-07-13 06:04:49.304812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.648 [2024-07-13 06:04:49.304883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:57.648 [2024-07-13 06:04:49.304919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.648 [2024-07-13 06:04:49.304931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.648 [2024-07-13 06:04:49.311615] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.648 [2024-07-13 06:04:49.311661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:57.648 [2024-07-13 06:04:49.311697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.648 [2024-07-13 06:04:49.311709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.648 [2024-07-13 06:04:49.311777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.648 [2024-07-13 06:04:49.311796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:57.648 [2024-07-13 06:04:49.311811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.648 [2024-07-13 06:04:49.311822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.648 [2024-07-13 06:04:49.311874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.648 [2024-07-13 06:04:49.311913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:57.648 [2024-07-13 06:04:49.311928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.648 [2024-07-13 06:04:49.311939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.648 [2024-07-13 06:04:49.312033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.648 [2024-07-13 06:04:49.312051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:57.648 [2024-07-13 06:04:49.312068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.648 [2024-07-13 06:04:49.312079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.648 [2024-07-13 06:04:49.312130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.648 [2024-07-13 06:04:49.312147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:57.648 [2024-07-13 06:04:49.312281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.648 [2024-07-13 06:04:49.312315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.648 [2024-07-13 06:04:49.312374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.648 [2024-07-13 06:04:49.312399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:57.648 [2024-07-13 06:04:49.312414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.648 [2024-07-13 06:04:49.312428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.648 [2024-07-13 06:04:49.312488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.648 [2024-07-13 06:04:49.312505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:57.648 [2024-07-13 06:04:49.312519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.648 [2024-07-13 06:04:49.312531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.648 [2024-07-13 06:04:49.312701] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.903 ms, result 0 00:17:57.905 06:04:49 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:57.905 06:04:49 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:57.905 [2024-07-13 06:04:49.613564] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:57.905 [2024-07-13 06:04:49.613763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90848 ] 00:17:58.163 [2024-07-13 06:04:49.761870] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.164 [2024-07-13 06:04:49.796902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.164 [2024-07-13 06:04:49.879428] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:58.164 [2024-07-13 06:04:49.879531] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:58.424 [2024-07-13 06:04:50.037342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.424 [2024-07-13 06:04:50.037406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:58.424 [2024-07-13 06:04:50.037437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:58.424 [2024-07-13 06:04:50.037450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.424 [2024-07-13 06:04:50.040183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.424 [2024-07-13 06:04:50.040233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:58.424 [2024-07-13 06:04:50.040250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.703 ms 00:17:58.424 [2024-07-13 06:04:50.040261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.424 [2024-07-13 06:04:50.040384] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:58.424 [2024-07-13 06:04:50.040687] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:58.424 [2024-07-13 06:04:50.040721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.424 [2024-07-13 06:04:50.040734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:58.424 [2024-07-13 06:04:50.040756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:17:58.424 [2024-07-13 06:04:50.040767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.424 [2024-07-13 06:04:50.042110] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:58.424 [2024-07-13 06:04:50.044306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.424 [2024-07-13 06:04:50.044345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:58.424 [2024-07-13 06:04:50.044372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.198 ms 00:17:58.424 [2024-07-13 06:04:50.044383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.424 [2024-07-13 06:04:50.044460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.424 [2024-07-13 06:04:50.044479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:58.424 [2024-07-13 06:04:50.044491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.024 ms 00:17:58.424 [2024-07-13 06:04:50.044514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.424 [2024-07-13 06:04:50.048656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.424 [2024-07-13 06:04:50.048694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:58.424 [2024-07-13 06:04:50.048725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.071 ms 00:17:58.424 [2024-07-13 06:04:50.048736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.424 [2024-07-13 06:04:50.048872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.424 [2024-07-13 06:04:50.048892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:58.424 [2024-07-13 06:04:50.048923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:58.424 [2024-07-13 06:04:50.048938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.424 [2024-07-13 06:04:50.048980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.424 [2024-07-13 06:04:50.048995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:58.424 [2024-07-13 06:04:50.049006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:58.424 [2024-07-13 06:04:50.049016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.424 [2024-07-13 06:04:50.049045] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:58.424 [2024-07-13 06:04:50.050438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.424 [2024-07-13 06:04:50.050476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:58.424 [2024-07-13 06:04:50.050500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.401 ms 00:17:58.424 [2024-07-13 06:04:50.050511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.424 [2024-07-13 06:04:50.050585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.424 [2024-07-13 06:04:50.050601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:58.424 [2024-07-13 06:04:50.050613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:58.424 [2024-07-13 06:04:50.050623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.424 [2024-07-13 06:04:50.050650] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:58.424 [2024-07-13 06:04:50.050675] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:58.424 [2024-07-13 06:04:50.050732] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:58.424 [2024-07-13 06:04:50.050754] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:58.424 [2024-07-13 06:04:50.050861] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:58.424 [2024-07-13 06:04:50.050876] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:58.424 [2024-07-13 06:04:50.050890] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:58.424 [2024-07-13 06:04:50.050919] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:58.424 [2024-07-13 06:04:50.050931] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:58.424 [2024-07-13 06:04:50.050942] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:58.424 [2024-07-13 06:04:50.050952] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:58.424 [2024-07-13 06:04:50.050967] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:58.424 [2024-07-13 06:04:50.050977] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:58.424 [2024-07-13 06:04:50.050988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.424 [2024-07-13 06:04:50.050999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:58.424 [2024-07-13 06:04:50.051019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:17:58.424 [2024-07-13 06:04:50.051032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.424 [2024-07-13 06:04:50.051121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.424 [2024-07-13 06:04:50.051136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:58.424 [2024-07-13 06:04:50.051163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:58.424 [2024-07-13 06:04:50.051191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.424 [2024-07-13 06:04:50.051327] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:58.424 [2024-07-13 06:04:50.051344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:58.424 [2024-07-13 06:04:50.051367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:58.424 [2024-07-13 06:04:50.051379] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.424 [2024-07-13 06:04:50.051399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:58.424 [2024-07-13 06:04:50.051410] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:58.424 [2024-07-13 06:04:50.051420] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:58.424 [2024-07-13 06:04:50.051430] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:58.424 [2024-07-13 06:04:50.051440] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:58.424 [2024-07-13 06:04:50.051450] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:58.424 [2024-07-13 06:04:50.051459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:58.424 [2024-07-13 06:04:50.051489] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:58.424 [2024-07-13 06:04:50.051499] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:58.424 [2024-07-13 06:04:50.051510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:58.424 [2024-07-13 06:04:50.051520] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:58.424 [2024-07-13 06:04:50.051531] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.424 [2024-07-13 06:04:50.051541] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:58.424 [2024-07-13 06:04:50.051551] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:58.424 [2024-07-13 06:04:50.051562] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.424 [2024-07-13 06:04:50.051572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:58.424 [2024-07-13 06:04:50.051583] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:58.424 [2024-07-13 06:04:50.051593] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.424 [2024-07-13 06:04:50.051603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:58.424 [2024-07-13 06:04:50.051614] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:58.424 [2024-07-13 06:04:50.051624] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.424 [2024-07-13 06:04:50.051633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:58.424 [2024-07-13 06:04:50.051644] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:58.424 [2024-07-13 06:04:50.051660] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.424 [2024-07-13 06:04:50.051671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:58.424 [2024-07-13 06:04:50.051681] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:58.424 [2024-07-13 06:04:50.051691] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.424 [2024-07-13 06:04:50.051701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:58.424 [2024-07-13 06:04:50.051711] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:58.424 [2024-07-13 06:04:50.051722] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:58.424 [2024-07-13 06:04:50.051732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:58.424 [2024-07-13 06:04:50.051742] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:58.424 [2024-07-13 06:04:50.051752] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:58.424 [2024-07-13 06:04:50.051762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:58.424 [2024-07-13 06:04:50.051772] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:58.424 [2024-07-13 06:04:50.051782] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.424 [2024-07-13 06:04:50.051793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:58.425 [2024-07-13 06:04:50.051803] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:58.425 [2024-07-13 06:04:50.051812] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.425 [2024-07-13 06:04:50.051825] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:58.425 [2024-07-13 06:04:50.051836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:58.425 [2024-07-13 06:04:50.051856] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:58.425 [2024-07-13 06:04:50.051867] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.425 [2024-07-13 06:04:50.051879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:58.425 
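The capacity records above are internally consistent and worth a sanity check: 23592960 L2P entries at the reported address size of 4 bytes come to exactly the 90.00 MiB shown for the l2p region. (The base-device region list resumes below with the vmap region's offset and size.) Plain arithmetic, no SPDK calls involved:

    entries = 23_592_960          # "L2P entries" from ftl_layout_setup above
    addr_size_bytes = 4           # "L2P address size"
    l2p_mib = entries * addr_size_bytes / (1024 * 1024)
    print(l2p_mib)  # 90.0, matching "Region l2p ... blocks: 90.00 MiB"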
[2024-07-13 06:04:50.051889] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:58.425 [2024-07-13 06:04:50.051900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:58.425 [2024-07-13 06:04:50.051911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:58.425 [2024-07-13 06:04:50.051921] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:58.425 [2024-07-13 06:04:50.051932] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:58.425 [2024-07-13 06:04:50.051944] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:58.425 [2024-07-13 06:04:50.051966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:58.425 [2024-07-13 06:04:50.051982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:58.425 [2024-07-13 06:04:50.051994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:58.425 [2024-07-13 06:04:50.052005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:58.425 [2024-07-13 06:04:50.052016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:58.425 [2024-07-13 06:04:50.052030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:58.425 [2024-07-13 06:04:50.052041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:58.425 [2024-07-13 06:04:50.052053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:58.425 [2024-07-13 06:04:50.052064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:58.425 [2024-07-13 06:04:50.052075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:58.425 [2024-07-13 06:04:50.052086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:58.425 [2024-07-13 06:04:50.052098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:58.425 [2024-07-13 06:04:50.052109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:58.425 [2024-07-13 06:04:50.052121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:58.425 [2024-07-13 06:04:50.052132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:58.425 [2024-07-13 06:04:50.052154] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:58.425 [2024-07-13 06:04:50.052206] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:58.425 [2024-07-13 06:04:50.052219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:58.425 [2024-07-13 06:04:50.052230] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:58.425 [2024-07-13 06:04:50.052242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:58.425 [2024-07-13 06:04:50.052253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:58.425 [2024-07-13 06:04:50.052269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.425 [2024-07-13 06:04:50.052281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:58.425 [2024-07-13 06:04:50.052293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:17:58.425 [2024-07-13 06:04:50.052304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.425 [2024-07-13 06:04:50.071984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.425 [2024-07-13 06:04:50.072045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:58.425 [2024-07-13 06:04:50.072085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.579 ms 00:17:58.425 [2024-07-13 06:04:50.072102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.425 [2024-07-13 06:04:50.072313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.425 [2024-07-13 06:04:50.072350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:58.425 [2024-07-13 06:04:50.072366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:58.425 [2024-07-13 06:04:50.072391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.425 [2024-07-13 06:04:50.080522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.425 [2024-07-13 06:04:50.080567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:58.425 [2024-07-13 06:04:50.080584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.094 ms 00:17:58.425 [2024-07-13 06:04:50.080602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.425 [2024-07-13 06:04:50.080676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.425 [2024-07-13 06:04:50.080694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:58.425 [2024-07-13 06:04:50.080707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:58.425 [2024-07-13 06:04:50.080719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.425 [2024-07-13 06:04:50.081040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.425 [2024-07-13 06:04:50.081074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:58.425 [2024-07-13 06:04:50.081090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:17:58.425 [2024-07-13 06:04:50.081102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.425 [2024-07-13 
06:04:50.081290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.425 [2024-07-13 06:04:50.081320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:58.425 [2024-07-13 06:04:50.081343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:17:58.425 [2024-07-13 06:04:50.081355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.425 [2024-07-13 06:04:50.086683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.425 [2024-07-13 06:04:50.086732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:58.425 [2024-07-13 06:04:50.086764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.296 ms 00:17:58.425 [2024-07-13 06:04:50.086786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.425 [2024-07-13 06:04:50.089280] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:58.425 [2024-07-13 06:04:50.089329] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:58.425 [2024-07-13 06:04:50.089349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.425 [2024-07-13 06:04:50.089362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:58.425 [2024-07-13 06:04:50.089374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.391 ms 00:17:58.425 [2024-07-13 06:04:50.089386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.425 [2024-07-13 06:04:50.105769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.425 [2024-07-13 06:04:50.105817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:58.425 [2024-07-13 06:04:50.105863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.324 ms 00:17:58.425 [2024-07-13 06:04:50.105881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.425 [2024-07-13 06:04:50.107825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.425 [2024-07-13 06:04:50.107864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:58.425 [2024-07-13 06:04:50.107895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.837 ms 00:17:58.425 [2024-07-13 06:04:50.107905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.425 [2024-07-13 06:04:50.109669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.425 [2024-07-13 06:04:50.109707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:58.425 [2024-07-13 06:04:50.109738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.699 ms 00:17:58.425 [2024-07-13 06:04:50.109749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.425 [2024-07-13 06:04:50.110191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.425 [2024-07-13 06:04:50.110216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:58.425 [2024-07-13 06:04:50.110231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:17:58.425 [2024-07-13 06:04:50.110242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.425 [2024-07-13 06:04:50.126833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:58.425 [2024-07-13 06:04:50.126903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:58.425 [2024-07-13 06:04:50.126952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.532 ms 00:17:58.425 [2024-07-13 06:04:50.126978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.425 [2024-07-13 06:04:50.135110] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:58.683 [2024-07-13 06:04:50.148744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.683 [2024-07-13 06:04:50.148821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:58.683 [2024-07-13 06:04:50.148857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.663 ms 00:17:58.683 [2024-07-13 06:04:50.148868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.683 [2024-07-13 06:04:50.149015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.683 [2024-07-13 06:04:50.149037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:58.683 [2024-07-13 06:04:50.149050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:58.683 [2024-07-13 06:04:50.149070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.683 [2024-07-13 06:04:50.149151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.683 [2024-07-13 06:04:50.149246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:58.683 [2024-07-13 06:04:50.149260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:58.683 [2024-07-13 06:04:50.149271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.683 [2024-07-13 06:04:50.149338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.683 [2024-07-13 06:04:50.149373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:58.683 [2024-07-13 06:04:50.149389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:58.683 [2024-07-13 06:04:50.149401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.683 [2024-07-13 06:04:50.149442] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:58.683 [2024-07-13 06:04:50.149459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.683 [2024-07-13 06:04:50.149470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:58.683 [2024-07-13 06:04:50.149491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:58.683 [2024-07-13 06:04:50.149502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.683 [2024-07-13 06:04:50.153010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.683 [2024-07-13 06:04:50.153050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:58.683 [2024-07-13 06:04:50.153084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.480 ms 00:17:58.683 [2024-07-13 06:04:50.153102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.683 [2024-07-13 06:04:50.153274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.683 [2024-07-13 06:04:50.153297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:58.683 [2024-07-13 06:04:50.153311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:58.683 [2024-07-13 06:04:50.153332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.683 [2024-07-13 06:04:50.154273] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:58.683 [2024-07-13 06:04:50.155526] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 116.609 ms, result 0 00:17:58.683 [2024-07-13 06:04:50.156277] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:58.683 [2024-07-13 06:04:50.165790] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:09.108  Copying: 27/256 [MB] (27 MBps) Copying: 51/256 [MB] (24 MBps) Copying: 73/256 [MB] (21 MBps) Copying: 97/256 [MB] (23 MBps) Copying: 120/256 [MB] (23 MBps) Copying: 144/256 [MB] (23 MBps) Copying: 169/256 [MB] (24 MBps) Copying: 193/256 [MB] (24 MBps) Copying: 217/256 [MB] (24 MBps) Copying: 241/256 [MB] (23 MBps) Copying: 256/256 [MB] (average 24 MBps)[2024-07-13 06:05:00.771378] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:09.108 [2024-07-13 06:05:00.772663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.108 [2024-07-13 06:05:00.772831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:09.108 [2024-07-13 06:05:00.772961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:09.108 [2024-07-13 06:05:00.773124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.108 [2024-07-13 06:05:00.773269] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:09.108 [2024-07-13 06:05:00.773915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.108 [2024-07-13 06:05:00.773948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:09.108 [2024-07-13 06:05:00.773964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:18:09.108 [2024-07-13 06:05:00.773976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.108 [2024-07-13 06:05:00.774308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.108 [2024-07-13 06:05:00.774345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:09.108 [2024-07-13 06:05:00.774361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:18:09.108 [2024-07-13 06:05:00.774373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.108 [2024-07-13 06:05:00.778217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.108 [2024-07-13 06:05:00.778248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:09.108 [2024-07-13 06:05:00.778262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.821 ms 00:18:09.108 [2024-07-13 06:05:00.778273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.108 [2024-07-13 06:05:00.786308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.108 [2024-07-13 06:05:00.786339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:09.108 
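The copy above finishes at 256/256 [MB] with an average of 24 MBps, which lines up with the spdk_dd invocation earlier (--count=65536) if the FTL bdev exposes 4 KiB blocks; that block size is an assumption here, the log does not state it. The trace timestamps bracket the transfer at roughly 10.6 s:

    BLOCK_SIZE = 4096   # assumed 4 KiB FTL block size (not stated in this log)
    blocks = 65536      # spdk_dd --count=65536 from the invocation above

    total_mb = blocks * BLOCK_SIZE / (1024 * 1024)
    print(total_mb)     # 256.0, matching "Copying: 256/256 [MB]"

    # the dd-side IO channel came up at 06:04:50.166 and was torn down at 06:05:00.771
    elapsed_s = (5 * 60 + 0.771) - (4 * 60 + 50.166)
    print(round(total_mb / elapsed_s, 1))  # ~24.1, matching "average 24 MBps"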
[2024-07-13 06:05:00.786361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.983 ms 00:18:09.108 [2024-07-13 06:05:00.786372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.108 [2024-07-13 06:05:00.787785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.108 [2024-07-13 06:05:00.787828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:09.108 [2024-07-13 06:05:00.787845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.338 ms 00:18:09.108 [2024-07-13 06:05:00.787857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.108 [2024-07-13 06:05:00.791082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.108 [2024-07-13 06:05:00.791124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:09.108 [2024-07-13 06:05:00.791171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.181 ms 00:18:09.108 [2024-07-13 06:05:00.791183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.108 [2024-07-13 06:05:00.791336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.108 [2024-07-13 06:05:00.791360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:09.108 [2024-07-13 06:05:00.791373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:18:09.108 [2024-07-13 06:05:00.791384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.108 [2024-07-13 06:05:00.793105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.108 [2024-07-13 06:05:00.793166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:09.108 [2024-07-13 06:05:00.793205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.697 ms 00:18:09.108 [2024-07-13 06:05:00.793216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.108 [2024-07-13 06:05:00.794531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.108 [2024-07-13 06:05:00.794569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:09.108 [2024-07-13 06:05:00.794584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.271 ms 00:18:09.108 [2024-07-13 06:05:00.794595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.108 [2024-07-13 06:05:00.795715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.108 [2024-07-13 06:05:00.795755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:09.108 [2024-07-13 06:05:00.795770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms 00:18:09.108 [2024-07-13 06:05:00.795781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.108 [2024-07-13 06:05:00.797040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.108 [2024-07-13 06:05:00.797079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:09.108 [2024-07-13 06:05:00.797094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.187 ms 00:18:09.108 [2024-07-13 06:05:00.797104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.108 [2024-07-13 06:05:00.797179] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:09.108 [2024-07-13 06:05:00.797223] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[Bands 2-99 print identical records: 0 / 261120 wr_cnt: 0 state: free; condensed]
00:18:09.109 [2024-07-13 06:05:00.798435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 
0 / 261120 wr_cnt: 0 state: free 00:18:09.109 [2024-07-13 06:05:00.798454] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:09.109 [2024-07-13 06:05:00.798465] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1a49c5a3-2b59-4791-9c81-0428a1736fe5 00:18:09.109 [2024-07-13 06:05:00.798476] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:09.109 [2024-07-13 06:05:00.798486] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:09.109 [2024-07-13 06:05:00.798496] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:09.109 [2024-07-13 06:05:00.798506] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:09.109 [2024-07-13 06:05:00.798529] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:09.109 [2024-07-13 06:05:00.798547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:09.109 [2024-07-13 06:05:00.798563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:09.109 [2024-07-13 06:05:00.798573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:09.109 [2024-07-13 06:05:00.798582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:09.109 [2024-07-13 06:05:00.798592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.109 [2024-07-13 06:05:00.798607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:09.109 [2024-07-13 06:05:00.798618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.440 ms 00:18:09.109 [2024-07-13 06:05:00.798628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.109 [2024-07-13 06:05:00.800078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.109 [2024-07-13 06:05:00.800108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:09.109 [2024-07-13 06:05:00.800143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.407 ms 00:18:09.109 [2024-07-13 06:05:00.800167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.109 [2024-07-13 06:05:00.800252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.109 [2024-07-13 06:05:00.800278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:09.109 [2024-07-13 06:05:00.800291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:09.109 [2024-07-13 06:05:00.800302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.109 [2024-07-13 06:05:00.805291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.109 [2024-07-13 06:05:00.805326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:09.109 [2024-07-13 06:05:00.805347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.109 [2024-07-13 06:05:00.805359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.109 [2024-07-13 06:05:00.805425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.109 [2024-07-13 06:05:00.805441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:09.109 [2024-07-13 06:05:00.805453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.109 [2024-07-13 06:05:00.805475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:18:09.109 [2024-07-13 06:05:00.805534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.109 [2024-07-13 06:05:00.805552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:09.110 [2024-07-13 06:05:00.805565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.110 [2024-07-13 06:05:00.805582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.110 [2024-07-13 06:05:00.805607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.110 [2024-07-13 06:05:00.805621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:09.110 [2024-07-13 06:05:00.805648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.110 [2024-07-13 06:05:00.805659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.110 [2024-07-13 06:05:00.814555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.110 [2024-07-13 06:05:00.814615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:09.110 [2024-07-13 06:05:00.814639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.110 [2024-07-13 06:05:00.814650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.110 [2024-07-13 06:05:00.822011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.110 [2024-07-13 06:05:00.822066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:09.110 [2024-07-13 06:05:00.822082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.110 [2024-07-13 06:05:00.822093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.110 [2024-07-13 06:05:00.822175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.110 [2024-07-13 06:05:00.822204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:09.110 [2024-07-13 06:05:00.822216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.110 [2024-07-13 06:05:00.822227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.110 [2024-07-13 06:05:00.822282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.110 [2024-07-13 06:05:00.822296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:09.110 [2024-07-13 06:05:00.822307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.110 [2024-07-13 06:05:00.822317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.110 [2024-07-13 06:05:00.822411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.110 [2024-07-13 06:05:00.822430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:09.110 [2024-07-13 06:05:00.822441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.110 [2024-07-13 06:05:00.822452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.110 [2024-07-13 06:05:00.822505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.110 [2024-07-13 06:05:00.822538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:09.110 [2024-07-13 06:05:00.822566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.110 [2024-07-13 
06:05:00.822578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.110 [2024-07-13 06:05:00.822624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:09.110 [2024-07-13 06:05:00.822641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:09.110 [2024-07-13 06:05:00.822653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:09.110 [2024-07-13 06:05:00.822664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.110 [2024-07-13 06:05:00.822723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:09.110 [2024-07-13 06:05:00.822752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:09.110 [2024-07-13 06:05:00.822765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:09.110 [2024-07-13 06:05:00.822776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.110 [2024-07-13 06:05:00.822948] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.248 ms, result 0
00:18:09.369 
00:18:09.369 
00:18:09.369 06:05:01 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:18:09.369 06:05:01 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:18:10.305 06:05:01 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:18:10.305 [2024-07-13 06:05:01.734111] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
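
The three ftl.ftl_trim traces above are the verification seam of the trim test: cmp checks that the first 4194304 bytes (4 MiB) of the data file read back as zeros, md5sum records the file's checksum, and spdk_dd then writes 1024 FTL blocks of random_pattern into the ftl0 bdev. At the 4 KiB block size implied by the layout dumps in this log, those 1024 blocks are exactly the 4 MiB that cmp compared against /dev/zero. The "WAF: inf" in the statistics dump above follows from the same numbers: write amplification here is total writes over user writes, and 960 / 0 prints as inf. A minimal sketch of the zero-check, assuming only what the commands above show (this is not the actual trim.sh logic):

    import hashlib

    DATA = "/home/vagrant/spdk_repo/spdk/test/ftl/data"  # path from the traces above
    FTL_BLOCK = 4096            # 4 KiB block size implied by the layout dump
    COUNT = 1024                # --count passed to spdk_dd
    NBYTES = FTL_BLOCK * COUNT  # 4194304, the --bytes passed to cmp

    with open(DATA, "rb") as f:
        head = f.read(NBYTES)

    # cmp --bytes=4194304 data /dev/zero: the trimmed range must read back as zeros
    assert head == bytes(NBYTES), "trimmed range is not all zeros"

    # md5sum data: checksum kept for comparison after the random-pattern rewrite
    with open(DATA, "rb") as f:
        print(hashlib.md5(f.read()).hexdigest())
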
00:18:10.305 [2024-07-13 06:05:01.734287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90976 ] 00:18:10.306 [2024-07-13 06:05:01.880563] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.306 [2024-07-13 06:05:01.924477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.306 [2024-07-13 06:05:02.017263] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:10.306 [2024-07-13 06:05:02.017361] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:10.566 [2024-07-13 06:05:02.176416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.566 [2024-07-13 06:05:02.176479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:10.566 [2024-07-13 06:05:02.176500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:10.566 [2024-07-13 06:05:02.176512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.566 [2024-07-13 06:05:02.179218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.566 [2024-07-13 06:05:02.179257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.566 [2024-07-13 06:05:02.179273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.675 ms 00:18:10.566 [2024-07-13 06:05:02.179284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.566 [2024-07-13 06:05:02.179489] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:10.566 [2024-07-13 06:05:02.179798] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:10.566 [2024-07-13 06:05:02.179835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.566 [2024-07-13 06:05:02.179850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.566 [2024-07-13 06:05:02.179874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:18:10.566 [2024-07-13 06:05:02.179886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.566 [2024-07-13 06:05:02.181217] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:10.566 [2024-07-13 06:05:02.183352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.566 [2024-07-13 06:05:02.183389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:10.566 [2024-07-13 06:05:02.183405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.137 ms 00:18:10.566 [2024-07-13 06:05:02.183417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.566 [2024-07-13 06:05:02.183502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.566 [2024-07-13 06:05:02.183523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:10.566 [2024-07-13 06:05:02.183538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:10.566 [2024-07-13 06:05:02.183553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.566 [2024-07-13 06:05:02.187973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:10.566 [2024-07-13 06:05:02.188029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:10.566 [2024-07-13 06:05:02.188045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.360 ms 00:18:10.566 [2024-07-13 06:05:02.188055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.566 [2024-07-13 06:05:02.188241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.566 [2024-07-13 06:05:02.188271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.566 [2024-07-13 06:05:02.188290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:18:10.566 [2024-07-13 06:05:02.188304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.566 [2024-07-13 06:05:02.188358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.566 [2024-07-13 06:05:02.188376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:10.566 [2024-07-13 06:05:02.188389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:10.566 [2024-07-13 06:05:02.188400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.566 [2024-07-13 06:05:02.188433] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:10.566 [2024-07-13 06:05:02.189833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.566 [2024-07-13 06:05:02.189884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.566 [2024-07-13 06:05:02.189908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.409 ms 00:18:10.566 [2024-07-13 06:05:02.189919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.566 [2024-07-13 06:05:02.190001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.566 [2024-07-13 06:05:02.190019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:10.566 [2024-07-13 06:05:02.190043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:10.566 [2024-07-13 06:05:02.190063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.566 [2024-07-13 06:05:02.190101] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:10.566 [2024-07-13 06:05:02.190150] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:10.566 [2024-07-13 06:05:02.190207] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:10.566 [2024-07-13 06:05:02.190232] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:10.566 [2024-07-13 06:05:02.190337] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:10.566 [2024-07-13 06:05:02.190353] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:10.566 [2024-07-13 06:05:02.190379] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:10.566 [2024-07-13 06:05:02.190394] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:10.566 [2024-07-13 06:05:02.190408] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:10.566 [2024-07-13 06:05:02.190420] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:10.566 [2024-07-13 06:05:02.190432] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:10.566 [2024-07-13 06:05:02.190447] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:10.566 [2024-07-13 06:05:02.190476] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:10.566 [2024-07-13 06:05:02.190488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.566 [2024-07-13 06:05:02.190500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:10.566 [2024-07-13 06:05:02.190512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:18:10.566 [2024-07-13 06:05:02.190526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.566 [2024-07-13 06:05:02.190628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.566 [2024-07-13 06:05:02.190658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:10.566 [2024-07-13 06:05:02.190686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:10.566 [2024-07-13 06:05:02.190713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.566 [2024-07-13 06:05:02.190827] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:10.566 [2024-07-13 06:05:02.190857] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:10.566 [2024-07-13 06:05:02.190869] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:10.566 [2024-07-13 06:05:02.190881] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.566 [2024-07-13 06:05:02.190893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:10.566 [2024-07-13 06:05:02.190903] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:10.566 [2024-07-13 06:05:02.190928] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:10.566 [2024-07-13 06:05:02.190938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:10.566 [2024-07-13 06:05:02.190948] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:10.566 [2024-07-13 06:05:02.190958] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:10.566 [2024-07-13 06:05:02.190968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:10.566 [2024-07-13 06:05:02.190982] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:10.566 [2024-07-13 06:05:02.190993] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:10.566 [2024-07-13 06:05:02.191003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:10.566 [2024-07-13 06:05:02.191014] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:10.566 [2024-07-13 06:05:02.191039] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.566 [2024-07-13 06:05:02.191049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:10.566 [2024-07-13 06:05:02.191059] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:10.566 [2024-07-13 06:05:02.191068] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.566 [2024-07-13 06:05:02.191078] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:10.566 [2024-07-13 06:05:02.191088] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:10.566 [2024-07-13 06:05:02.191097] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.566 [2024-07-13 06:05:02.191107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:10.566 [2024-07-13 06:05:02.191116] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:10.566 [2024-07-13 06:05:02.191126] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.566 [2024-07-13 06:05:02.191135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:10.567 [2024-07-13 06:05:02.191145] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:10.567 [2024-07-13 06:05:02.191159] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.567 [2024-07-13 06:05:02.191170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:10.567 [2024-07-13 06:05:02.191180] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:10.567 [2024-07-13 06:05:02.191205] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.567 [2024-07-13 06:05:02.191217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:10.567 [2024-07-13 06:05:02.191227] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:10.567 [2024-07-13 06:05:02.191237] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:10.567 [2024-07-13 06:05:02.191247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:10.567 [2024-07-13 06:05:02.191257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:10.567 [2024-07-13 06:05:02.191266] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:10.567 [2024-07-13 06:05:02.191276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:10.567 [2024-07-13 06:05:02.191286] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:10.567 [2024-07-13 06:05:02.191295] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.567 [2024-07-13 06:05:02.191305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:10.567 [2024-07-13 06:05:02.191315] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:10.567 [2024-07-13 06:05:02.191325] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.567 [2024-07-13 06:05:02.191337] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:10.567 [2024-07-13 06:05:02.191349] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:10.567 [2024-07-13 06:05:02.191369] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:10.567 [2024-07-13 06:05:02.191379] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.567 [2024-07-13 06:05:02.191391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:10.567 [2024-07-13 06:05:02.191401] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:10.567 [2024-07-13 06:05:02.191410] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:10.567 
[2024-07-13 06:05:02.191421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:10.567 [2024-07-13 06:05:02.191431] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:10.567 [2024-07-13 06:05:02.191441] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:10.567 [2024-07-13 06:05:02.191452] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:10.567 [2024-07-13 06:05:02.191465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:10.567 [2024-07-13 06:05:02.191489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:10.567 [2024-07-13 06:05:02.191500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:10.567 [2024-07-13 06:05:02.191511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:10.567 [2024-07-13 06:05:02.191522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:10.567 [2024-07-13 06:05:02.191536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:10.567 [2024-07-13 06:05:02.191547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:10.567 [2024-07-13 06:05:02.191558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:10.567 [2024-07-13 06:05:02.191569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:10.567 [2024-07-13 06:05:02.191579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:10.567 [2024-07-13 06:05:02.191590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:10.567 [2024-07-13 06:05:02.191601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:10.567 [2024-07-13 06:05:02.191612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:10.567 [2024-07-13 06:05:02.191622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:10.567 [2024-07-13 06:05:02.191633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:10.567 [2024-07-13 06:05:02.191655] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:10.567 [2024-07-13 06:05:02.191667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:10.567 [2024-07-13 06:05:02.191679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:10.567 [2024-07-13 06:05:02.191690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:10.567 [2024-07-13 06:05:02.191701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:10.567 [2024-07-13 06:05:02.191711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:10.567 [2024-07-13 06:05:02.191726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.567 [2024-07-13 06:05:02.191738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:10.567 [2024-07-13 06:05:02.191750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:18:10.567 [2024-07-13 06:05:02.191761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.567 [2024-07-13 06:05:02.210989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.567 [2024-07-13 06:05:02.211063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:10.567 [2024-07-13 06:05:02.211085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.144 ms 00:18:10.567 [2024-07-13 06:05:02.211099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.567 [2024-07-13 06:05:02.211340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.567 [2024-07-13 06:05:02.211374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:10.567 [2024-07-13 06:05:02.211388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:10.567 [2024-07-13 06:05:02.211399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.567 [2024-07-13 06:05:02.219433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.567 [2024-07-13 06:05:02.219504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:10.567 [2024-07-13 06:05:02.219520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.998 ms 00:18:10.567 [2024-07-13 06:05:02.219539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.567 [2024-07-13 06:05:02.219627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.567 [2024-07-13 06:05:02.219645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:10.567 [2024-07-13 06:05:02.219658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:10.567 [2024-07-13 06:05:02.219669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.567 [2024-07-13 06:05:02.220041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.567 [2024-07-13 06:05:02.220070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:10.567 [2024-07-13 06:05:02.220084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:18:10.567 [2024-07-13 06:05:02.220095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.567 [2024-07-13 06:05:02.220278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.567 [2024-07-13 06:05:02.220301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:10.567 [2024-07-13 06:05:02.220315] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:18:10.567 [2024-07-13 06:05:02.220326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.567 [2024-07-13 06:05:02.225373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.567 [2024-07-13 06:05:02.225412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:10.567 [2024-07-13 06:05:02.225428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.016 ms 00:18:10.567 [2024-07-13 06:05:02.225452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.567 [2024-07-13 06:05:02.227869] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:10.567 [2024-07-13 06:05:02.227926] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:10.567 [2024-07-13 06:05:02.227945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.567 [2024-07-13 06:05:02.227957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:10.567 [2024-07-13 06:05:02.227968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.301 ms 00:18:10.567 [2024-07-13 06:05:02.227979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.567 [2024-07-13 06:05:02.245944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.567 [2024-07-13 06:05:02.246001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:10.567 [2024-07-13 06:05:02.246025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.908 ms 00:18:10.567 [2024-07-13 06:05:02.246038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.567 [2024-07-13 06:05:02.248111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.567 [2024-07-13 06:05:02.248172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:10.567 [2024-07-13 06:05:02.248187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.948 ms 00:18:10.567 [2024-07-13 06:05:02.248197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.567 [2024-07-13 06:05:02.249924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.567 [2024-07-13 06:05:02.249972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:10.567 [2024-07-13 06:05:02.249986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.670 ms 00:18:10.567 [2024-07-13 06:05:02.249996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.567 [2024-07-13 06:05:02.250453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.567 [2024-07-13 06:05:02.250483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:10.567 [2024-07-13 06:05:02.250498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:18:10.567 [2024-07-13 06:05:02.250509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.567 [2024-07-13 06:05:02.266276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.567 [2024-07-13 06:05:02.266342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:10.567 [2024-07-13 06:05:02.266361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.715 ms 00:18:10.567 [2024-07-13 06:05:02.266372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.567 [2024-07-13 06:05:02.274385] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:10.568 [2024-07-13 06:05:02.287919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.568 [2024-07-13 06:05:02.287988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:10.568 [2024-07-13 06:05:02.288005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.430 ms 00:18:10.568 [2024-07-13 06:05:02.288016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.568 [2024-07-13 06:05:02.288161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.568 [2024-07-13 06:05:02.288202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:10.568 [2024-07-13 06:05:02.288215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:10.568 [2024-07-13 06:05:02.288225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.568 [2024-07-13 06:05:02.288293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.568 [2024-07-13 06:05:02.288344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:10.568 [2024-07-13 06:05:02.288357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:10.568 [2024-07-13 06:05:02.288368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.568 [2024-07-13 06:05:02.288431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.568 [2024-07-13 06:05:02.288448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:10.568 [2024-07-13 06:05:02.288476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:10.568 [2024-07-13 06:05:02.288487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.568 [2024-07-13 06:05:02.288533] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:10.568 [2024-07-13 06:05:02.288552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.568 [2024-07-13 06:05:02.288563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:10.568 [2024-07-13 06:05:02.288575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:10.568 [2024-07-13 06:05:02.288586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.828 [2024-07-13 06:05:02.292304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.828 [2024-07-13 06:05:02.292366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:10.828 [2024-07-13 06:05:02.292381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.690 ms 00:18:10.828 [2024-07-13 06:05:02.292398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.828 [2024-07-13 06:05:02.292505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.828 [2024-07-13 06:05:02.292524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:10.828 [2024-07-13 06:05:02.292549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:10.828 [2024-07-13 06:05:02.292574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.828 
[2024-07-13 06:05:02.293606] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:10.828 [2024-07-13 06:05:02.294858] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 116.875 ms, result 0
00:18:10.828 [2024-07-13 06:05:02.295642] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:10.828 [2024-07-13 06:05:02.305034] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:10.828  Copying: 4096/4096 [kB] (average 24 MBps)
[2024-07-13 06:05:02.467630] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:10.828 [2024-07-13 06:05:02.468731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:10.828 [2024-07-13 06:05:02.468787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:10.828 [2024-07-13 06:05:02.468806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:18:10.828 [2024-07-13 06:05:02.468818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:10.828 [2024-07-13 06:05:02.468849] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:18:10.828 [2024-07-13 06:05:02.469342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:10.828 [2024-07-13 06:05:02.469363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:10.828 [2024-07-13 06:05:02.469375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms
00:18:10.828 [2024-07-13 06:05:02.469399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:10.828 [2024-07-13 06:05:02.470966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:10.828 [2024-07-13 06:05:02.471026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:10.828 [2024-07-13 06:05:02.471042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.536 ms
00:18:10.828 [2024-07-13 06:05:02.471053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:10.828 [2024-07-13 06:05:02.475219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:10.828 [2024-07-13 06:05:02.475255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:10.828 [2024-07-13 06:05:02.475269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.141 ms
00:18:10.828 [2024-07-13 06:05:02.475281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:10.828 [2024-07-13 06:05:02.482772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:10.828 [2024-07-13 06:05:02.482820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:18:10.828 [2024-07-13 06:05:02.482842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.450 ms
00:18:10.828 [2024-07-13 06:05:02.482853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:10.828 [2024-07-13 06:05:02.484093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:10.828 [2024-07-13 06:05:02.484155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:18:10.828 [2024-07-13 06:05:02.484171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms
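
The layout dumps during the startup above print each region twice, once in MiB (offset/blocks) and once in raw FTL blocks in hex (blk_offs/blk_sz), and the figures cross-check at a 4 KiB block: blk_sz 0x5a00 for region type 0x2 is 23040 blocks, i.e. the 90.00 MiB reported for the l2p region, which in turn is exactly what 23592960 L2P entries at the reported 4-byte address size occupy. A short sketch of those cross-checks, using only constants printed in the dump above:

    # Cross-checks of the FTL layout figures from the dump above (illustrative).
    BLOCK = 4096   # bytes per FTL block, implied by 0x5a00 blocks == 90.00 MiB
    MiB = 1024 * 1024

    def blocks_to_mib(nblocks: int) -> float:
        return nblocks * BLOCK / MiB

    # "Region type:0x2 ... blk_sz:0x5a00"  vs  "Region l2p ... blocks: 90.00 MiB"
    assert blocks_to_mib(0x5A00) == 90.0

    # "L2P entries: 23592960" at "L2P address size: 4" fill that region exactly
    assert 23592960 * 4 == 90 * MiB

    # "Region type:0x9 ... blk_sz:0x1900000" on the base dev matches
    # "Region data_btm ... blocks: 102400.00 MiB"
    assert blocks_to_mib(0x1900000) == 102400.0

The Band lines in the validity dumps follow the same scale: 261120 blocks per band is 1020 MiB at a 4 KiB block.
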
00:18:10.828 [2024-07-13 06:05:02.484183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.828 [2024-07-13 06:05:02.487263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.828 [2024-07-13 06:05:02.487324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:10.828 [2024-07-13 06:05:02.487338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.040 ms 00:18:10.828 [2024-07-13 06:05:02.487349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.828 [2024-07-13 06:05:02.487482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.828 [2024-07-13 06:05:02.487512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:10.828 [2024-07-13 06:05:02.487540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:18:10.828 [2024-07-13 06:05:02.487551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.828 [2024-07-13 06:05:02.489498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.828 [2024-07-13 06:05:02.489564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:10.828 [2024-07-13 06:05:02.489579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.914 ms 00:18:10.828 [2024-07-13 06:05:02.489589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.828 [2024-07-13 06:05:02.491022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.828 [2024-07-13 06:05:02.491070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:10.828 [2024-07-13 06:05:02.491100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.392 ms 00:18:10.828 [2024-07-13 06:05:02.491111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.828 [2024-07-13 06:05:02.492111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.828 [2024-07-13 06:05:02.492157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:10.828 [2024-07-13 06:05:02.492172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:18:10.828 [2024-07-13 06:05:02.492182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.828 [2024-07-13 06:05:02.493358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.828 [2024-07-13 06:05:02.493393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:10.828 [2024-07-13 06:05:02.493407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.105 ms 00:18:10.828 [2024-07-13 06:05:02.493417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.828 [2024-07-13 06:05:02.493459] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:10.828 [2024-07-13 06:05:02.493484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:10.828 [2024-07-13 06:05:02.493499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493534] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 
06:05:02.493844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.493997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:18:10.829 [2024-07-13 06:05:02.494159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:10.829 [2024-07-13 06:05:02.494697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:10.830 [2024-07-13 06:05:02.494709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:10.830 [2024-07-13 06:05:02.494721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:10.830 [2024-07-13 06:05:02.494741] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:10.830 [2024-07-13 06:05:02.494764] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1a49c5a3-2b59-4791-9c81-0428a1736fe5 00:18:10.830 [2024-07-13 06:05:02.494776] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:10.830 [2024-07-13 06:05:02.494787] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:10.830 [2024-07-13 
06:05:02.494797] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:10.830 [2024-07-13 06:05:02.494809] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:10.830 [2024-07-13 06:05:02.494824] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:10.830 [2024-07-13 06:05:02.494835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:10.830 [2024-07-13 06:05:02.494857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:10.830 [2024-07-13 06:05:02.494867] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:10.830 [2024-07-13 06:05:02.494877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:10.830 [2024-07-13 06:05:02.494888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.830 [2024-07-13 06:05:02.494904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:10.830 [2024-07-13 06:05:02.494916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.431 ms 00:18:10.830 [2024-07-13 06:05:02.494943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.830 [2024-07-13 06:05:02.496301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.830 [2024-07-13 06:05:02.496323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:10.830 [2024-07-13 06:05:02.496344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.333 ms 00:18:10.830 [2024-07-13 06:05:02.496364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.830 [2024-07-13 06:05:02.496445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.830 [2024-07-13 06:05:02.496472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:10.830 [2024-07-13 06:05:02.496484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:10.830 [2024-07-13 06:05:02.496503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.830 [2024-07-13 06:05:02.501233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.830 [2024-07-13 06:05:02.501272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:10.830 [2024-07-13 06:05:02.501287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.830 [2024-07-13 06:05:02.501298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.830 [2024-07-13 06:05:02.501364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.830 [2024-07-13 06:05:02.501392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:10.830 [2024-07-13 06:05:02.501404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.830 [2024-07-13 06:05:02.501415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.830 [2024-07-13 06:05:02.501478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.830 [2024-07-13 06:05:02.501504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:10.830 [2024-07-13 06:05:02.501521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.830 [2024-07-13 06:05:02.501532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.830 [2024-07-13 06:05:02.501558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:18:10.830 [2024-07-13 06:05:02.501572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:10.830 [2024-07-13 06:05:02.501586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.830 [2024-07-13 06:05:02.501597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.830 [2024-07-13 06:05:02.509865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.830 [2024-07-13 06:05:02.509928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:10.830 [2024-07-13 06:05:02.509952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.830 [2024-07-13 06:05:02.509973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.830 [2024-07-13 06:05:02.516733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.830 [2024-07-13 06:05:02.516798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:10.830 [2024-07-13 06:05:02.516814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.830 [2024-07-13 06:05:02.516825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.830 [2024-07-13 06:05:02.516884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.830 [2024-07-13 06:05:02.516900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.830 [2024-07-13 06:05:02.516921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.830 [2024-07-13 06:05:02.516932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.830 [2024-07-13 06:05:02.517006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.830 [2024-07-13 06:05:02.517021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.830 [2024-07-13 06:05:02.517032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.830 [2024-07-13 06:05:02.517043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.830 [2024-07-13 06:05:02.517140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.830 [2024-07-13 06:05:02.517187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:10.830 [2024-07-13 06:05:02.517203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.830 [2024-07-13 06:05:02.517214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.830 [2024-07-13 06:05:02.517276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.830 [2024-07-13 06:05:02.517295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:10.830 [2024-07-13 06:05:02.517322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.830 [2024-07-13 06:05:02.517334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.830 [2024-07-13 06:05:02.517390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.830 [2024-07-13 06:05:02.517406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.830 [2024-07-13 06:05:02.517418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.830 [2024-07-13 06:05:02.517429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.830 
[2024-07-13 06:05:02.517500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.830 [2024-07-13 06:05:02.517535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.830 [2024-07-13 06:05:02.517547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.830 [2024-07-13 06:05:02.517559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.830 [2024-07-13 06:05:02.517724] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.985 ms, result 0 00:18:11.090 00:18:11.090 00:18:11.090 06:05:02 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=90990 00:18:11.090 06:05:02 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:11.090 06:05:02 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 90990 00:18:11.090 06:05:02 ftl.ftl_trim -- common/autotest_common.sh@829 -- # '[' -z 90990 ']' 00:18:11.090 06:05:02 ftl.ftl_trim -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.090 06:05:02 ftl.ftl_trim -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:11.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.090 06:05:02 ftl.ftl_trim -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.090 06:05:02 ftl.ftl_trim -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:11.090 06:05:02 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:11.349 [2024-07-13 06:05:02.834855] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:11.349 [2024-07-13 06:05:02.835016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90990 ] 00:18:11.349 [2024-07-13 06:05:02.976741] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.349 [2024-07-13 06:05:03.013640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.282 06:05:03 ftl.ftl_trim -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:12.282 06:05:03 ftl.ftl_trim -- common/autotest_common.sh@862 -- # return 0 00:18:12.282 06:05:03 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:12.282 [2024-07-13 06:05:03.982121] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:12.282 [2024-07-13 06:05:03.982210] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:12.543 [2024-07-13 06:05:04.162892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.543 [2024-07-13 06:05:04.162994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:12.543 [2024-07-13 06:05:04.163062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:12.543 [2024-07-13 06:05:04.163080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.543 [2024-07-13 06:05:04.166538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.543 [2024-07-13 06:05:04.166684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:12.543 [2024-07-13 06:05:04.166738] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 3.401 ms 00:18:12.543 [2024-07-13 06:05:04.166755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.543 [2024-07-13 06:05:04.166981] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:12.543 [2024-07-13 06:05:04.167374] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:12.543 [2024-07-13 06:05:04.167452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.543 [2024-07-13 06:05:04.167477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:12.543 [2024-07-13 06:05:04.167539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:18:12.543 [2024-07-13 06:05:04.167554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.543 [2024-07-13 06:05:04.169434] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:12.543 [2024-07-13 06:05:04.172355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.543 [2024-07-13 06:05:04.172422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:12.543 [2024-07-13 06:05:04.172447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.929 ms 00:18:12.543 [2024-07-13 06:05:04.172473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.543 [2024-07-13 06:05:04.172632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.543 [2024-07-13 06:05:04.172675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:12.543 [2024-07-13 06:05:04.172690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:12.543 [2024-07-13 06:05:04.172737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.543 [2024-07-13 06:05:04.178660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.543 [2024-07-13 06:05:04.178728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:12.543 [2024-07-13 06:05:04.178780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.825 ms 00:18:12.543 [2024-07-13 06:05:04.178800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.543 [2024-07-13 06:05:04.179039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.543 [2024-07-13 06:05:04.179101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:12.543 [2024-07-13 06:05:04.179116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:18:12.543 [2024-07-13 06:05:04.179169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.543 [2024-07-13 06:05:04.179245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.543 [2024-07-13 06:05:04.179266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:12.543 [2024-07-13 06:05:04.179310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:12.543 [2024-07-13 06:05:04.179353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.543 [2024-07-13 06:05:04.179412] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:12.543 [2024-07-13 06:05:04.181248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.543 [2024-07-13 
06:05:04.181296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:12.543 [2024-07-13 06:05:04.181337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.839 ms 00:18:12.543 [2024-07-13 06:05:04.181359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.543 [2024-07-13 06:05:04.181495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.543 [2024-07-13 06:05:04.181565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:12.543 [2024-07-13 06:05:04.181640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:12.543 [2024-07-13 06:05:04.181656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.543 [2024-07-13 06:05:04.181691] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:12.543 [2024-07-13 06:05:04.181737] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:12.543 [2024-07-13 06:05:04.181819] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:12.543 [2024-07-13 06:05:04.181861] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:12.543 [2024-07-13 06:05:04.182024] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:12.543 [2024-07-13 06:05:04.182055] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:12.543 [2024-07-13 06:05:04.182093] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:12.543 [2024-07-13 06:05:04.182122] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:12.543 [2024-07-13 06:05:04.182170] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:12.543 [2024-07-13 06:05:04.182198] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:12.543 [2024-07-13 06:05:04.182243] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:12.543 [2024-07-13 06:05:04.182257] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:12.543 [2024-07-13 06:05:04.182272] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:12.543 [2024-07-13 06:05:04.182284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.543 [2024-07-13 06:05:04.182297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:12.543 [2024-07-13 06:05:04.182317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:18:12.543 [2024-07-13 06:05:04.182331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.543 [2024-07-13 06:05:04.182458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.543 [2024-07-13 06:05:04.182483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:12.543 [2024-07-13 06:05:04.182496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:12.543 [2024-07-13 06:05:04.182537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.543 [2024-07-13 06:05:04.182710] ftl_layout.c: 
758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:12.543 [2024-07-13 06:05:04.182751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:12.544 [2024-07-13 06:05:04.182774] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:12.544 [2024-07-13 06:05:04.182810] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.544 [2024-07-13 06:05:04.182822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:12.544 [2024-07-13 06:05:04.182837] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:12.544 [2024-07-13 06:05:04.182849] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:12.544 [2024-07-13 06:05:04.182862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:12.544 [2024-07-13 06:05:04.182882] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:12.544 [2024-07-13 06:05:04.182907] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:12.544 [2024-07-13 06:05:04.182956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:12.544 [2024-07-13 06:05:04.182975] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:12.544 [2024-07-13 06:05:04.183020] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:12.544 [2024-07-13 06:05:04.183057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:12.544 [2024-07-13 06:05:04.183076] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:12.544 [2024-07-13 06:05:04.183114] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.544 [2024-07-13 06:05:04.183125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:12.544 [2024-07-13 06:05:04.183179] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:12.544 [2024-07-13 06:05:04.183192] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.544 [2024-07-13 06:05:04.183218] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:12.544 [2024-07-13 06:05:04.183238] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:12.544 [2024-07-13 06:05:04.183265] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.544 [2024-07-13 06:05:04.183277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:12.544 [2024-07-13 06:05:04.183291] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:12.544 [2024-07-13 06:05:04.183301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.544 [2024-07-13 06:05:04.183313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:12.544 [2024-07-13 06:05:04.183339] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:12.544 [2024-07-13 06:05:04.183368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.544 [2024-07-13 06:05:04.183380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:12.544 [2024-07-13 06:05:04.183393] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:12.544 [2024-07-13 06:05:04.183418] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.544 [2024-07-13 06:05:04.183429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:12.544 [2024-07-13 
06:05:04.183439] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:12.544 [2024-07-13 06:05:04.183466] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:12.544 [2024-07-13 06:05:04.183484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:12.544 [2024-07-13 06:05:04.183505] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:12.544 [2024-07-13 06:05:04.183538] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:12.544 [2024-07-13 06:05:04.183626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:12.544 [2024-07-13 06:05:04.183653] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:12.544 [2024-07-13 06:05:04.183676] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.544 [2024-07-13 06:05:04.183717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:12.544 [2024-07-13 06:05:04.183733] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:12.544 [2024-07-13 06:05:04.183745] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.544 [2024-07-13 06:05:04.183758] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:12.544 [2024-07-13 06:05:04.183771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:12.544 [2024-07-13 06:05:04.183784] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:12.544 [2024-07-13 06:05:04.183796] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.544 [2024-07-13 06:05:04.183826] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:12.544 [2024-07-13 06:05:04.183848] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:12.544 [2024-07-13 06:05:04.183870] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:12.544 [2024-07-13 06:05:04.183903] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:12.544 [2024-07-13 06:05:04.183921] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:12.544 [2024-07-13 06:05:04.183948] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:12.544 [2024-07-13 06:05:04.183987] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:12.544 [2024-07-13 06:05:04.184011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:12.544 [2024-07-13 06:05:04.184037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:12.544 [2024-07-13 06:05:04.184049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:12.544 [2024-07-13 06:05:04.184061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:12.544 [2024-07-13 06:05:04.184071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:12.544 [2024-07-13 06:05:04.184083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:12.544 
[2024-07-13 06:05:04.184094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:12.544 [2024-07-13 06:05:04.184106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:12.544 [2024-07-13 06:05:04.184116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:12.544 [2024-07-13 06:05:04.184128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:12.544 [2024-07-13 06:05:04.184167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:12.544 [2024-07-13 06:05:04.184213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:12.544 [2024-07-13 06:05:04.184228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:12.544 [2024-07-13 06:05:04.184241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:12.544 [2024-07-13 06:05:04.184251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:12.544 [2024-07-13 06:05:04.184265] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:12.544 [2024-07-13 06:05:04.184279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:12.544 [2024-07-13 06:05:04.184293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:12.544 [2024-07-13 06:05:04.184304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:12.544 [2024-07-13 06:05:04.184328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:12.544 [2024-07-13 06:05:04.184359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:12.544 [2024-07-13 06:05:04.184392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.544 [2024-07-13 06:05:04.184405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:12.544 [2024-07-13 06:05:04.184419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.740 ms 00:18:12.544 [2024-07-13 06:05:04.184440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.544 [2024-07-13 06:05:04.195578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.544 [2024-07-13 06:05:04.195648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:12.544 [2024-07-13 06:05:04.195688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.939 ms 00:18:12.544 [2024-07-13 06:05:04.195710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.544 [2024-07-13 06:05:04.195969] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.544 [2024-07-13 06:05:04.196006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:12.544 [2024-07-13 06:05:04.196037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:18:12.544 [2024-07-13 06:05:04.196121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.544 [2024-07-13 06:05:04.207216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.544 [2024-07-13 06:05:04.207270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:12.544 [2024-07-13 06:05:04.207299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.017 ms 00:18:12.544 [2024-07-13 06:05:04.207321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.544 [2024-07-13 06:05:04.207424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.544 [2024-07-13 06:05:04.207443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:12.544 [2024-07-13 06:05:04.207457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:12.544 [2024-07-13 06:05:04.207484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.544 [2024-07-13 06:05:04.207972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.544 [2024-07-13 06:05:04.208065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:12.544 [2024-07-13 06:05:04.208119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:18:12.544 [2024-07-13 06:05:04.208154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.544 [2024-07-13 06:05:04.208347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.544 [2024-07-13 06:05:04.208387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:12.544 [2024-07-13 06:05:04.208438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:18:12.544 [2024-07-13 06:05:04.208451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.544 [2024-07-13 06:05:04.216108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.544 [2024-07-13 06:05:04.216189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:12.544 [2024-07-13 06:05:04.216225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.619 ms 00:18:12.544 [2024-07-13 06:05:04.216265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.544 [2024-07-13 06:05:04.219429] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:12.545 [2024-07-13 06:05:04.219503] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:12.545 [2024-07-13 06:05:04.219541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.545 [2024-07-13 06:05:04.219572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:12.545 [2024-07-13 06:05:04.219588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.100 ms 00:18:12.545 [2024-07-13 06:05:04.219609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.545 [2024-07-13 06:05:04.237989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.545 [2024-07-13 
06:05:04.238061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:12.545 [2024-07-13 06:05:04.238100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.185 ms 00:18:12.545 [2024-07-13 06:05:04.238155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.545 [2024-07-13 06:05:04.240490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.545 [2024-07-13 06:05:04.240572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:12.545 [2024-07-13 06:05:04.240593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.199 ms 00:18:12.545 [2024-07-13 06:05:04.240606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.545 [2024-07-13 06:05:04.242957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.545 [2024-07-13 06:05:04.243014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:12.545 [2024-07-13 06:05:04.243041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.282 ms 00:18:12.545 [2024-07-13 06:05:04.243065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.545 [2024-07-13 06:05:04.243571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.545 [2024-07-13 06:05:04.243616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:12.545 [2024-07-13 06:05:04.243635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:18:12.545 [2024-07-13 06:05:04.243663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.805 [2024-07-13 06:05:04.273002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.805 [2024-07-13 06:05:04.273086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:12.805 [2024-07-13 06:05:04.273121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.266 ms 00:18:12.805 [2024-07-13 06:05:04.273135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.805 [2024-07-13 06:05:04.282025] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:12.805 [2024-07-13 06:05:04.296619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.805 [2024-07-13 06:05:04.296702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:12.805 [2024-07-13 06:05:04.296734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.268 ms 00:18:12.805 [2024-07-13 06:05:04.296750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.805 [2024-07-13 06:05:04.296905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.805 [2024-07-13 06:05:04.296929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:12.805 [2024-07-13 06:05:04.296946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:12.805 [2024-07-13 06:05:04.296959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.805 [2024-07-13 06:05:04.297023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.805 [2024-07-13 06:05:04.297056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:12.805 [2024-07-13 06:05:04.297069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:12.805 [2024-07-13 
06:05:04.297084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.805 [2024-07-13 06:05:04.297117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.805 [2024-07-13 06:05:04.297136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:12.805 [2024-07-13 06:05:04.297148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:12.805 [2024-07-13 06:05:04.297233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.805 [2024-07-13 06:05:04.297295] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:12.805 [2024-07-13 06:05:04.297318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.805 [2024-07-13 06:05:04.297332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:12.805 [2024-07-13 06:05:04.297348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:12.805 [2024-07-13 06:05:04.297361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.805 [2024-07-13 06:05:04.301324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.805 [2024-07-13 06:05:04.301365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:12.805 [2024-07-13 06:05:04.301387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.926 ms 00:18:12.805 [2024-07-13 06:05:04.301400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.805 [2024-07-13 06:05:04.301551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.805 [2024-07-13 06:05:04.301575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:12.805 [2024-07-13 06:05:04.301600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:12.805 [2024-07-13 06:05:04.301628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.805 [2024-07-13 06:05:04.302611] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:12.805 [2024-07-13 06:05:04.303865] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 139.505 ms, result 0 00:18:12.805 [2024-07-13 06:05:04.304860] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:12.805 Some configs were skipped because the RPC state that can call them passed over. 
00:18:12.805 06:05:04 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:13.066 [2024-07-13 06:05:04.555984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.066 [2024-07-13 06:05:04.556075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:13.066 [2024-07-13 06:05:04.556111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.664 ms 00:18:13.066 [2024-07-13 06:05:04.556125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.066 [2024-07-13 06:05:04.556247] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.899 ms, result 0 00:18:13.066 true 00:18:13.066 06:05:04 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:13.326 [2024-07-13 06:05:04.823742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.326 [2024-07-13 06:05:04.823796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:13.326 [2024-07-13 06:05:04.823820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.158 ms 00:18:13.326 [2024-07-13 06:05:04.823834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.326 [2024-07-13 06:05:04.823886] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.319 ms, result 0 00:18:13.326 true 00:18:13.326 06:05:04 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 90990 00:18:13.326 06:05:04 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 90990 ']' 00:18:13.326 06:05:04 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 90990 00:18:13.326 06:05:04 ftl.ftl_trim -- common/autotest_common.sh@953 -- # uname 00:18:13.326 06:05:04 ftl.ftl_trim -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.326 06:05:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90990 00:18:13.326 06:05:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:13.326 06:05:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:13.326 killing process with pid 90990 00:18:13.326 06:05:04 ftl.ftl_trim -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90990' 00:18:13.326 06:05:04 ftl.ftl_trim -- common/autotest_common.sh@967 -- # kill 90990 00:18:13.326 06:05:04 ftl.ftl_trim -- common/autotest_common.sh@972 -- # wait 90990 00:18:13.326 [2024-07-13 06:05:04.968589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.326 [2024-07-13 06:05:04.968662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:13.326 [2024-07-13 06:05:04.968683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:13.326 [2024-07-13 06:05:04.968699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.326 [2024-07-13 06:05:04.968742] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:13.326 [2024-07-13 06:05:04.969301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.326 [2024-07-13 06:05:04.969334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:13.326 [2024-07-13 06:05:04.969352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.529 ms 00:18:13.326 [2024-07-13 06:05:04.969367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.326 [2024-07-13 06:05:04.969705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.326 [2024-07-13 06:05:04.969735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:13.326 [2024-07-13 06:05:04.969753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:18:13.326 [2024-07-13 06:05:04.969775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.326 [2024-07-13 06:05:04.974112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.326 [2024-07-13 06:05:04.974190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:13.326 [2024-07-13 06:05:04.974214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.303 ms 00:18:13.326 [2024-07-13 06:05:04.974228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.326 [2024-07-13 06:05:04.982355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.326 [2024-07-13 06:05:04.982404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:13.326 [2024-07-13 06:05:04.982423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.066 ms 00:18:13.326 [2024-07-13 06:05:04.982436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.326 [2024-07-13 06:05:04.983768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.326 [2024-07-13 06:05:04.983807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:13.326 [2024-07-13 06:05:04.983827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.217 ms 00:18:13.326 [2024-07-13 06:05:04.983856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.326 [2024-07-13 06:05:04.986762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.326 [2024-07-13 06:05:04.986801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:13.326 [2024-07-13 06:05:04.986821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.856 ms 00:18:13.326 [2024-07-13 06:05:04.986836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.326 [2024-07-13 06:05:04.987004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.326 [2024-07-13 06:05:04.987031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:13.326 [2024-07-13 06:05:04.987049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:18:13.326 [2024-07-13 06:05:04.987062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.326 [2024-07-13 06:05:04.989039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.326 [2024-07-13 06:05:04.989077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:13.326 [2024-07-13 06:05:04.989096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.945 ms 00:18:13.326 [2024-07-13 06:05:04.989110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.326 [2024-07-13 06:05:04.990459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.326 [2024-07-13 06:05:04.990494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:13.326 [2024-07-13 
06:05:04.990512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.265 ms 00:18:13.326 [2024-07-13 06:05:04.990525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.326 [2024-07-13 06:05:04.991617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.326 [2024-07-13 06:05:04.991663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:13.326 [2024-07-13 06:05:04.991682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:18:13.326 [2024-07-13 06:05:04.991694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.326 [2024-07-13 06:05:04.992822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.326 [2024-07-13 06:05:04.992858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:13.326 [2024-07-13 06:05:04.992876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.047 ms 00:18:13.326 [2024-07-13 06:05:04.992888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.326 [2024-07-13 06:05:04.992935] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:13.326 [2024-07-13 06:05:04.992961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:13.326 [2024-07-13 06:05:04.992979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:13.326 [2024-07-13 06:05:04.992993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:13.326 [2024-07-13 06:05:04.993010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:13.326 [2024-07-13 06:05:04.993023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:13.326 [2024-07-13 06:05:04.993039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:13.326 [2024-07-13 06:05:04.993052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:13.326 [2024-07-13 06:05:04.993066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:13.326 [2024-07-13 06:05:04.993080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:13.326 [2024-07-13 06:05:04.993095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:13.326 [2024-07-13 06:05:04.993119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:13.326 [2024-07-13 06:05:04.993135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:13.327 [2024-07-13 06:05:04.993206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:13.327 [2024-07-13 06:05:04.993236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:13.327 [2024-07-13 06:05:04.993255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:13.327 [2024-07-13 06:05:04.993277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:13.327 [2024-07-13 06:05:04.993289] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:18:13.327 [2024-07-13 06:05:04.993317 .. 06:05:04.994554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (all 83 bands identical)
00:18:13.327 [2024-07-13 06:05:04.994576] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:13.327 [2024-07-13 06:05:04.994591] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1a49c5a3-2b59-4791-9c81-0428a1736fe5
00:18:13.327 [2024-07-13 06:05:04.994604] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:13.327 [2024-07-13 06:05:04.994618] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:13.327 [2024-07-13 06:05:04.994633] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:13.327 [2024-07-13 06:05:04.994647] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:13.327 [2024-07-13 06:05:04.994659] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:13.327 [2024-07-13 06:05:04.994684 .. 06:05:04.994721] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0, high: 0, low: 0, start: 0
00:18:13.328 [2024-07-13 06:05:04.994736] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration 1.804 ms, status 0
00:18:13.328 [2024-07-13 06:05:04.996241] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration 1.430 ms, status 0
00:18:13.328 [2024-07-13 06:05:04.996392] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration 0.055 ms, status 0
00:18:13.328 [2024-07-13 06:05:05.002110 .. 06:05:05.019592] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback steps, each duration 0.000 ms, status 0: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
00:18:13.328 [2024-07-13 06:05:05.019762] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.141 ms, result 0
00:18:13.587 06:05:05 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:18:13.846 [2024-07-13 06:05:05.315489] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:18:13.846 [2024-07-13 06:05:05.315704] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91026 ]
00:18:13.846 [2024-07-13 06:05:05.466392] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:13.846 [2024-07-13 06:05:05.502238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:14.106 [2024-07-13 06:05:05.586591] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:14.106 [2024-07-13 06:05:05.586684] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:14.106 [2024-07-13 06:05:05.743848] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration 0.006 ms, status 0
00:18:14.106 [2024-07-13 06:05:05.746693] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration 2.703 ms, status 0
00:18:14.106 [2024-07-13 06:05:05.746905] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:18:14.106 [2024-07-13 06:05:05.747234] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:18:14.106 [2024-07-13 06:05:05.747267] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration 0.369 ms, status 0
00:18:14.106 [2024-07-13 06:05:05.748677] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:18:14.106 [2024-07-13 06:05:05.751041] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block': duration 2.366 ms, status 0
00:18:14.106 [2024-07-13 06:05:05.751236] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block': duration 0.026 ms, status 0
00:18:14.106 [2024-07-13 06:05:05.755780] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools': duration 4.420 ms, status 0
00:18:14.106 [2024-07-13 06:05:05.756001] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands': duration 0.082 ms, status 0
00:18:14.106 [2024-07-13 06:05:05.756144] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device': duration 0.029 ms, status 0
00:18:14.106 [2024-07-13 06:05:05.756245] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:18:14.106 [2024-07-13 06:05:05.757685] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel': duration 1.449 ms, status 0
00:18:14.106 [2024-07-13 06:05:05.757803] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands': duration 0.012 ms, status 0
00:18:14.106 [2024-07-13 06:05:05.757881] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:18:14.106 [2024-07-13 06:05:05.757914 .. 06:05:05.757997] upgrade/ftl_sb_v5.c: ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x168 bytes
00:18:14.106 [2024-07-13 06:05:05.758097 .. 06:05:05.758181] upgrade/ftl_sb_v5.c: ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x168 bytes
00:18:14.106 [2024-07-13 06:05:05.758196] ftl_layout.c: ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB, NV cache device capacity: 5171.00 MiB, L2P entries: 23592960, L2P address size: 4, P2L checkpoint pages: 2048, NV cache chunk count 5
00:18:14.106 [2024-07-13 06:05:05.758283] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout': duration 0.409 ms, status 0
00:18:14.106 [2024-07-13 06:05:05.758415] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout': duration 0.062 ms, status 0
00:18:14.106 [2024-07-13 06:05:05.758582] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region: offset / blocks):
  sb: 0.00 MiB / 0.12 MiB; l2p: 0.12 MiB / 90.00 MiB; band_md: 90.12 MiB / 0.50 MiB; band_md_mirror: 90.62 MiB / 0.50 MiB; nvc_md: 123.88 MiB / 0.12 MiB; nvc_md_mirror: 124.00 MiB / 0.12 MiB
  p2l0: 91.12 MiB / 8.00 MiB; p2l1: 99.12 MiB / 8.00 MiB; p2l2: 107.12 MiB / 8.00 MiB; p2l3: 115.12 MiB / 8.00 MiB
  trim_md: 123.12 MiB / 0.25 MiB; trim_md_mirror: 123.38 MiB / 0.25 MiB; trim_log: 123.62 MiB / 0.12 MiB; trim_log_mirror: 123.75 MiB / 0.12 MiB
00:18:14.107 [2024-07-13 06:05:05.759036] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (region: offset / blocks):
  sb_mirror: 0.00 MiB / 0.12 MiB; vmap: 102400.25 MiB / 3.38 MiB; data_btm: 0.25 MiB / 102400.00 MiB
00:18:14.107 [2024-07-13 06:05:05.759154] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
  Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20; type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00; type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80; type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80; type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800; type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800; type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800; type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
  Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40; type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40; type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20; type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20; type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20; type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20; type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:18:14.107 [2024-07-13 06:05:05.759371] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
  Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20; type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20; type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000; type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360; type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:18:14.107 [2024-07-13 06:05:05.759445] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade': duration 0.920 ms, status 0
00:18:14.107 [2024-07-13 06:05:05.778007] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata': duration 18.407 ms, status 0
00:18:14.107 [2024-07-13 06:05:05.778362] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses': duration 0.064 ms, status 0
00:18:14.107 [2024-07-13 06:05:05.787469] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache': duration 9.002 ms, status 0
00:18:14.107 [2024-07-13 06:05:05.787687] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map': duration 0.005 ms, status 0
00:18:14.107 [2024-07-13 06:05:05.788177] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map': duration 0.350 ms, status 0
00:18:14.107 [2024-07-13 06:05:05.788483] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata': duration 0.158 ms, status 0
00:18:14.107 [2024-07-13 06:05:05.794297] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc': duration 5.701 ms, status 0
00:18:14.107 [2024-07-13 06:05:05.796895] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:18:14.107 [2024-07-13 06:05:05.796967] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:18:14.107 [2024-07-13 06:05:05.796994] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata': duration 2.494 ms, status 0
00:18:14.107 [2024-07-13 06:05:05.811319] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata': duration 14.237 ms, status 0
00:18:14.107 [2024-07-13 06:05:05.813379] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata': duration 1.875 ms, status 0
00:18:14.107 [2024-07-13 06:05:05.815148] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata': duration 1.589 ms, status 0
00:18:14.107 [2024-07-13 06:05:05.815654] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing': duration 0.328 ms, status 0
00:18:14.366 [2024-07-13 06:05:05.833829] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints': duration 18.069 ms, status 0
00:18:14.367 [2024-07-13 06:05:05.843961] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:18:14.367 [2024-07-13 06:05:05.857361] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P': duration 23.115 ms, status 0
00:18:14.367 [2024-07-13 06:05:05.857660] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P': duration 0.007 ms, status 0
00:18:14.367 [2024-07-13 06:05:05.857821] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization': duration 0.081 ms, status 0
00:18:14.367 [2024-07-13 06:05:05.857912] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller': duration 0.007 ms, status 0
00:18:14.367 [2024-07-13 06:05:05.858002] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:18:14.367 [2024-07-13 06:05:05.858018] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup': duration 0.018 ms, status 0
00:18:14.367 [2024-07-13 06:05:05.861647] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state': duration 3.553 ms, status 0
00:18:14.367 [2024-07-13 06:05:05.861830] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization': duration 0.035 ms, status 0
00:18:14.367 [2024-07-13 06:05:05.862889] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:14.367 [2024-07-13 06:05:05.864288] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 118.725 ms, result 0
00:18:14.367 [2024-07-13 06:05:05.865109] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:14.367 [2024-07-13 06:05:05.874500] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:25.556  Copying: 25/256 [MB] (25 MBps) Copying: 48/256 [MB] (22 MBps) Copying: 71/256 [MB] (23 MBps) Copying: 94/256 [MB] (22 MBps) Copying: 116/256 [MB] (22 MBps) Copying: 139/256 [MB] (22 MBps) Copying: 162/256 [MB] (22 MBps) Copying: 185/256 [MB] (22 MBps) Copying: 208/256 [MB] (23 MBps) Copying: 231/256 [MB] (23 MBps) Copying: 255/256 [MB] (23 MBps) Copying: 256/256 [MB] (average 23 MBps)
00:18:25.556 [2024-07-13 06:05:17.180826] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:25.556 [2024-07-13 06:05:17.182224] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel': duration 0.005 ms, status 0
00:18:25.556 [2024-07-13 06:05:17.182357] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:18:25.556 [2024-07-13 06:05:17.182962] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device': duration 0.579 ms, status 0
00:18:25.556 [2024-07-13 06:05:17.183462] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller': duration 0.375 ms, status 0
00:18:25.556 [2024-07-13 06:05:17.189790] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P': duration 6.225 ms, status 0
00:18:25.556 [2024-07-13 06:05:17.198166] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims': duration 8.263 ms, status 0
00:18:25.556 [2024-07-13 06:05:17.199703] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata': duration 1.382 ms, status 0
00:18:25.556 [2024-07-13 06:05:17.202986] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata': duration 3.155 ms, status 0
00:18:25.556 [2024-07-13 06:05:17.203236] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata': duration 0.089 ms, status 0
00:18:25.556 [2024-07-13 06:05:17.205094] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'persist band info metadata': duration 1.729 ms, status 0
00:18:25.556 [2024-07-13 06:05:17.206834] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'persist trim metadata': duration 1.556 ms, status 0
00:18:25.556 [2024-07-13 06:05:17.208236] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock': duration 1.287 ms, status 0
00:18:25.556 [2024-07-13 06:05:17.209723] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state': duration 1.303 ms, status 0
00:18:25.556 [2024-07-13 06:05:17.209835] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:25.556 [2024-07-13 06:05:17.209855 .. 06:05:17.211081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (all 100 bands identical)
00:18:25.557 [2024-07-13 06:05:17.211102] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:25.557 [2024-07-13 06:05:17.211116] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1a49c5a3-2b59-4791-9c81-0428a1736fe5
00:18:25.557 [2024-07-13 06:05:17.211142] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:25.557 [2024-07-13 06:05:17.211157] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:25.557 [2024-07-13 06:05:17.211168] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:25.557 [2024-07-13 06:05:17.211185] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:25.557 [2024-07-13 06:05:17.211196] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:25.557 [2024-07-13 06:05:17.211221 .. 06:05:17.211257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0, high: 0, low: 0, start: 0
00:18:25.557 [2024-07-13 06:05:17.211269] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration 1.435 ms, status 0
00:18:25.557 [2024-07-13 06:05:17.212716] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration 1.377 ms, status 0
00:18:25.558 [2024-07-13 06:05:17.212885] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration 0.056 ms, status 0
00:18:25.558 [2024-07-13 06:05:17.217577] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize reloc': duration 0.000 ms, status 0
00:18:25.558 [2024-07-13 06:05:17.217734] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback 'Initialize bands metadata': duration 0.000 ms, status 0
00:18:25.558 [2024-07-13 06:05:17.217819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.558 [2024-07-13 06:05:17.217836] mngt/ftl_mngt.c: 428:trace_step:
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:25.558 [2024-07-13 06:05:17.217853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.558 [2024-07-13 06:05:17.217879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.558 [2024-07-13 06:05:17.217935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.558 [2024-07-13 06:05:17.217948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:25.558 [2024-07-13 06:05:17.217970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.558 [2024-07-13 06:05:17.217988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.558 [2024-07-13 06:05:17.225594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.558 [2024-07-13 06:05:17.225683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:25.558 [2024-07-13 06:05:17.225726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.558 [2024-07-13 06:05:17.225737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.558 [2024-07-13 06:05:17.232211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.558 [2024-07-13 06:05:17.232274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:25.558 [2024-07-13 06:05:17.232305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.558 [2024-07-13 06:05:17.232317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.558 [2024-07-13 06:05:17.232369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.558 [2024-07-13 06:05:17.232397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:25.558 [2024-07-13 06:05:17.232417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.558 [2024-07-13 06:05:17.232432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.558 [2024-07-13 06:05:17.232466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.558 [2024-07-13 06:05:17.232478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:25.558 [2024-07-13 06:05:17.232490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.558 [2024-07-13 06:05:17.232500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.558 [2024-07-13 06:05:17.232654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.558 [2024-07-13 06:05:17.232673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:25.558 [2024-07-13 06:05:17.232686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.558 [2024-07-13 06:05:17.232697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.558 [2024-07-13 06:05:17.232770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.558 [2024-07-13 06:05:17.232789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:25.558 [2024-07-13 06:05:17.232801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.558 [2024-07-13 06:05:17.232813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
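A note on the shutdown statistics dumped above (ftl_dev_dump_stats): WAF, the write amplification factor, is total device writes divided by user writes, so with total writes = 960 and user writes = 0 the report prints "inf". Every band is still free at this point (0 of 261120 blocks written), so the 960 writes are presumably internal metadata traffic only. A minimal standalone sketch of the same arithmetic (not part of the SPDK test scripts):

    total_writes=960
    user_writes=0
    awk -v t="$total_writes" -v u="$user_writes" \
        'BEGIN { if (u == 0) print "WAF: inf"; else printf "WAF: %.2f\n", t / u }'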
00:18:25.558 [2024-07-13 06:05:17.232861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.558 [2024-07-13 06:05:17.232883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:25.558 [2024-07-13 06:05:17.232896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.558 [2024-07-13 06:05:17.232908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.558 [2024-07-13 06:05:17.232968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:25.558 [2024-07-13 06:05:17.232997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:25.558 [2024-07-13 06:05:17.233010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:25.558 [2024-07-13 06:05:17.233022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.558 [2024-07-13 06:05:17.233213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.953 ms, result 0 00:18:25.845 00:18:25.845 00:18:25.845 06:05:17 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:26.413 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:26.413 06:05:17 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:26.413 06:05:17 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:26.413 06:05:17 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:26.413 06:05:17 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:26.413 06:05:17 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:26.413 06:05:18 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:26.413 06:05:18 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 90990 00:18:26.413 06:05:18 ftl.ftl_trim -- common/autotest_common.sh@948 -- # '[' -z 90990 ']' 00:18:26.413 06:05:18 ftl.ftl_trim -- common/autotest_common.sh@952 -- # kill -0 90990 00:18:26.413 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (90990) - No such process 00:18:26.413 06:05:18 ftl.ftl_trim -- common/autotest_common.sh@975 -- # echo 'Process with pid 90990 is not found' 00:18:26.413 Process with pid 90990 is not found 00:18:26.413 00:18:26.413 real 0m55.515s 00:18:26.413 user 1m17.258s 00:18:26.413 sys 0m6.158s 00:18:26.413 06:05:18 ftl.ftl_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:26.413 06:05:18 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:26.413 ************************************ 00:18:26.413 END TEST ftl_trim 00:18:26.413 ************************************
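The md5sum lines above are how ftl_trim validates data integrity: a checksum of the test file is recorded while the data is known good, and md5sum -c re-verifies it after the FTL device has been shut down, so any corruption across the shutdown cycle fails the test. A minimal sketch of the same pattern (illustrative paths, not the test's own helpers):

    # record a checksum while the data is known good
    md5sum /path/to/data > /tmp/testfile.md5
    # ... write, trim, shut the device down ...
    # re-verify: prints "/path/to/data: OK" and exits 0 on success
    md5sum -c /tmp/testfile.md5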
00:18:26.413 06:05:18 ftl -- common/autotest_common.sh@1142 -- # return 0 00:18:26.413 06:05:18 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:26.413 06:05:18 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:26.413 06:05:18 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.413 06:05:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:26.672 ************************************ 00:18:26.672 START TEST ftl_restore 00:18:26.672 ************************************ 00:18:26.672 06:05:18 ftl.ftl_restore -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:26.672 * Looking for test storage... 00:18:26.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:26.672 06:05:18 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:26.672 06:05:18 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:26.672 06:05:18 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:26.672 06:05:18 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.HgZpQuSCqr
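restore.sh parses its command line with getopts before the test body runs: the spec ':u:c:f' makes -u and -c take arguments while -f is a bare flag, and in the trace that follows, -c selects the NV cache device (0000:00:10.0) before the positional base device (0000:00:11.0) is picked up. A hedged re-creation of that flow (variable names and the -u/-f semantics are my guesses, not the script's own):

    while getopts ":u:c:f" opt; do
      case $opt in
        u) uuid=$OPTARG ;;      # e.g. restore an existing instance by UUID
        c) nv_cache=$OPTARG ;;  # PCI address of the NV cache device
        f) flag=1 ;;            # bare flag; stays empty when not passed
      esac
    done
    shift $((OPTIND - 1))       # the xtrace below records this as 'shift 2'
    device=$1
    # Numerically testing an empty flag ('[ "" -eq 1 ]') produces the
    # "[: : integer expression expected" complaint logged further down at
    # restore.sh line 54; defaulting the variable avoids it:
    if [ "${flag:-0}" -eq 1 ]; then echo 'flag was passed'; fi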
00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=91216 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 91216 00:18:26.673 06:05:18 ftl.ftl_restore -- common/autotest_common.sh@829 -- # '[' -z 91216 ']' 00:18:26.673 06:05:18 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:26.673 06:05:18 ftl.ftl_restore -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.673 06:05:18 ftl.ftl_restore -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:26.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.673 06:05:18 ftl.ftl_restore -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.673 06:05:18 ftl.ftl_restore -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:26.673 06:05:18 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:26.673 [2024-07-13 06:05:18.361935] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:26.673 [2024-07-13 06:05:18.362167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91216 ] 00:18:26.932 [2024-07-13 06:05:18.512345] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.932 [2024-07-13 06:05:18.557951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.868 06:05:19 ftl.ftl_restore -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.868 06:05:19 ftl.ftl_restore -- common/autotest_common.sh@862 -- # return 0 00:18:27.868 06:05:19 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:27.868 06:05:19 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:27.868 06:05:19 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:27.868 06:05:19 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:27.868 06:05:19 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:27.868 06:05:19 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:28.127 06:05:19 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:28.127 06:05:19 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:28.127 06:05:19 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:28.127 06:05:19 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:28.127 06:05:19 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info
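The get_bdev_size helper invoked above derives a bdev's capacity from bdev_get_bdevs output: size in MiB = block_size * num_blocks / 1048576, which for the nvme0n1 JSON below is 4096 * 1310720 / 1048576 = 5120 MiB. A condensed sketch of the same arithmetic (the single jq/awk pipeline is my phrasing; the helper issues separate jq calls, as the trace shows):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
        | jq -r '.[0] | "\(.block_size) \(.num_blocks)"' \
        | awk '{ printf "%d MiB\n", $1 * $2 / 1048576 }'

The requested volume size (103424 MiB) is then compared against this 5120 MiB base size; the lvol created afterwards is thin-provisioned (-t), so a logical size larger than the physical device is acceptable.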
00:18:28.127 06:05:19 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:28.127 06:05:19 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:28.127 06:05:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:28.385 06:05:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:28.385 { 00:18:28.385 "name": "nvme0n1", 00:18:28.385 "aliases": [ 00:18:28.385 "bcf8e9af-2da4-43e6-8f45-6045a0202981" 00:18:28.385 ], 00:18:28.385 "product_name": "NVMe disk", 00:18:28.385 "block_size": 4096, 00:18:28.385 "num_blocks": 1310720, 00:18:28.385 "uuid": "bcf8e9af-2da4-43e6-8f45-6045a0202981", 00:18:28.385 "assigned_rate_limits": { 00:18:28.385 "rw_ios_per_sec": 0, 00:18:28.385 "rw_mbytes_per_sec": 0, 00:18:28.385 "r_mbytes_per_sec": 0, 00:18:28.386 "w_mbytes_per_sec": 0 00:18:28.386 }, 00:18:28.386 "claimed": true, 00:18:28.386 "claim_type": "read_many_write_one", 00:18:28.386 "zoned": false, 00:18:28.386 "supported_io_types": { 00:18:28.386 "read": true, 00:18:28.386 "write": true, 00:18:28.386 "unmap": true, 00:18:28.386 "flush": true, 00:18:28.386 "reset": true, 00:18:28.386 "nvme_admin": true, 00:18:28.386 "nvme_io": true, 00:18:28.386 "nvme_io_md": false, 00:18:28.386 "write_zeroes": true, 00:18:28.386 "zcopy": false, 00:18:28.386 "get_zone_info": false, 00:18:28.386 "zone_management": false, 00:18:28.386 "zone_append": false, 00:18:28.386 "compare": true, 00:18:28.386 "compare_and_write": false, 00:18:28.386 "abort": true, 00:18:28.386 "seek_hole": false, 00:18:28.386 "seek_data": false, 00:18:28.386 "copy": true, 00:18:28.386 "nvme_iov_md": false 00:18:28.386 }, 00:18:28.386 "driver_specific": { 00:18:28.386 "nvme": [ 00:18:28.386 { 00:18:28.386 "pci_address": "0000:00:11.0", 00:18:28.386 "trid": { 00:18:28.386 "trtype": "PCIe", 00:18:28.386 "traddr": "0000:00:11.0" 00:18:28.386 }, 00:18:28.386 "ctrlr_data": { 00:18:28.386 "cntlid": 0, 00:18:28.386 "vendor_id": "0x1b36", 00:18:28.386 "model_number": "QEMU NVMe Ctrl", 00:18:28.386 "serial_number": "12341", 00:18:28.386 "firmware_revision": "8.0.0", 00:18:28.386 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:28.386 "oacs": { 00:18:28.386 "security": 0, 00:18:28.386 "format": 1, 00:18:28.386 "firmware": 0, 00:18:28.386 "ns_manage": 1 00:18:28.386 }, 00:18:28.386 "multi_ctrlr": false, 00:18:28.386 "ana_reporting": false 00:18:28.386 }, 00:18:28.386 "vs": { 00:18:28.386 "nvme_version": "1.4" 00:18:28.386 }, 00:18:28.386 "ns_data": { 00:18:28.386 "id": 1, 00:18:28.386 "can_share": false 00:18:28.386 } 00:18:28.386 } 00:18:28.386 ], 00:18:28.386 "mp_policy": "active_passive" 00:18:28.386 } 00:18:28.386 } 00:18:28.386 ]' 00:18:28.386 06:05:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:28.386 06:05:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:28.386 06:05:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:28.386 06:05:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:28.386 06:05:20 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:28.386 06:05:20 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:18:28.386 06:05:20 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:28.386 06:05:20 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:28.386 06:05:20 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:28.386 06:05:20 ftl.ftl_restore -- ftl/common.sh@28 -- 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:28.386 06:05:20 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:28.644 06:05:20 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=8095fdbf-f51a-4b5d-a213-75c37daa9449 00:18:28.644 06:05:20 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:28.644 06:05:20 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8095fdbf-f51a-4b5d-a213-75c37daa9449 00:18:28.903 06:05:20 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:29.162 06:05:20 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=c6d74a56-0b76-4eb7-8058-6a682783f761 00:18:29.162 06:05:20 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c6d74a56-0b76-4eb7-8058-6a682783f761 00:18:29.420 06:05:21 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=b51c9127-469e-4c8b-abe2-dc2a8d9380fe 00:18:29.420 06:05:21 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:29.420 06:05:21 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b51c9127-469e-4c8b-abe2-dc2a8d9380fe 00:18:29.420 06:05:21 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:29.420 06:05:21 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:29.420 06:05:21 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=b51c9127-469e-4c8b-abe2-dc2a8d9380fe 00:18:29.420 06:05:21 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:29.420 06:05:21 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size b51c9127-469e-4c8b-abe2-dc2a8d9380fe 00:18:29.420 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=b51c9127-469e-4c8b-abe2-dc2a8d9380fe 00:18:29.420 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:29.420 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:29.420 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:29.421 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b51c9127-469e-4c8b-abe2-dc2a8d9380fe 00:18:29.679 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:29.679 { 00:18:29.679 "name": "b51c9127-469e-4c8b-abe2-dc2a8d9380fe", 00:18:29.679 "aliases": [ 00:18:29.679 "lvs/nvme0n1p0" 00:18:29.679 ], 00:18:29.679 "product_name": "Logical Volume", 00:18:29.679 "block_size": 4096, 00:18:29.679 "num_blocks": 26476544, 00:18:29.679 "uuid": "b51c9127-469e-4c8b-abe2-dc2a8d9380fe", 00:18:29.679 "assigned_rate_limits": { 00:18:29.679 "rw_ios_per_sec": 0, 00:18:29.679 "rw_mbytes_per_sec": 0, 00:18:29.679 "r_mbytes_per_sec": 0, 00:18:29.680 "w_mbytes_per_sec": 0 00:18:29.680 }, 00:18:29.680 "claimed": false, 00:18:29.680 "zoned": false, 00:18:29.680 "supported_io_types": { 00:18:29.680 "read": true, 00:18:29.680 "write": true, 00:18:29.680 "unmap": true, 00:18:29.680 "flush": false, 00:18:29.680 "reset": true, 00:18:29.680 "nvme_admin": false, 00:18:29.680 "nvme_io": false, 00:18:29.680 "nvme_io_md": false, 00:18:29.680 "write_zeroes": true, 00:18:29.680 "zcopy": false, 00:18:29.680 "get_zone_info": false, 00:18:29.680 "zone_management": false, 00:18:29.680 "zone_append": false, 00:18:29.680 "compare": false, 00:18:29.680 "compare_and_write": false, 00:18:29.680 "abort": 
false, 00:18:29.680 "seek_hole": true, 00:18:29.680 "seek_data": true, 00:18:29.680 "copy": false, 00:18:29.680 "nvme_iov_md": false 00:18:29.680 }, 00:18:29.680 "driver_specific": { 00:18:29.680 "lvol": { 00:18:29.680 "lvol_store_uuid": "c6d74a56-0b76-4eb7-8058-6a682783f761", 00:18:29.680 "base_bdev": "nvme0n1", 00:18:29.680 "thin_provision": true, 00:18:29.680 "num_allocated_clusters": 0, 00:18:29.680 "snapshot": false, 00:18:29.680 "clone": false, 00:18:29.680 "esnap_clone": false 00:18:29.680 } 00:18:29.680 } 00:18:29.680 } 00:18:29.680 ]' 00:18:29.680 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:29.680 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:29.680 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:29.680 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:29.680 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:29.680 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:29.680 06:05:21 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:29.680 06:05:21 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:29.680 06:05:21 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:30.247 06:05:21 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:30.247 06:05:21 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:30.247 06:05:21 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size b51c9127-469e-4c8b-abe2-dc2a8d9380fe 00:18:30.247 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=b51c9127-469e-4c8b-abe2-dc2a8d9380fe 00:18:30.247 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:30.247 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:30.247 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:30.247 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b51c9127-469e-4c8b-abe2-dc2a8d9380fe 00:18:30.247 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:30.247 { 00:18:30.248 "name": "b51c9127-469e-4c8b-abe2-dc2a8d9380fe", 00:18:30.248 "aliases": [ 00:18:30.248 "lvs/nvme0n1p0" 00:18:30.248 ], 00:18:30.248 "product_name": "Logical Volume", 00:18:30.248 "block_size": 4096, 00:18:30.248 "num_blocks": 26476544, 00:18:30.248 "uuid": "b51c9127-469e-4c8b-abe2-dc2a8d9380fe", 00:18:30.248 "assigned_rate_limits": { 00:18:30.248 "rw_ios_per_sec": 0, 00:18:30.248 "rw_mbytes_per_sec": 0, 00:18:30.248 "r_mbytes_per_sec": 0, 00:18:30.248 "w_mbytes_per_sec": 0 00:18:30.248 }, 00:18:30.248 "claimed": false, 00:18:30.248 "zoned": false, 00:18:30.248 "supported_io_types": { 00:18:30.248 "read": true, 00:18:30.248 "write": true, 00:18:30.248 "unmap": true, 00:18:30.248 "flush": false, 00:18:30.248 "reset": true, 00:18:30.248 "nvme_admin": false, 00:18:30.248 "nvme_io": false, 00:18:30.248 "nvme_io_md": false, 00:18:30.248 "write_zeroes": true, 00:18:30.248 "zcopy": false, 00:18:30.248 "get_zone_info": false, 00:18:30.248 "zone_management": false, 00:18:30.248 "zone_append": false, 00:18:30.248 "compare": false, 00:18:30.248 "compare_and_write": false, 00:18:30.248 "abort": false, 00:18:30.248 "seek_hole": true, 00:18:30.248 "seek_data": 
true, 00:18:30.248 "copy": false, 00:18:30.248 "nvme_iov_md": false 00:18:30.248 }, 00:18:30.248 "driver_specific": { 00:18:30.248 "lvol": { 00:18:30.248 "lvol_store_uuid": "c6d74a56-0b76-4eb7-8058-6a682783f761", 00:18:30.248 "base_bdev": "nvme0n1", 00:18:30.248 "thin_provision": true, 00:18:30.248 "num_allocated_clusters": 0, 00:18:30.248 "snapshot": false, 00:18:30.248 "clone": false, 00:18:30.248 "esnap_clone": false 00:18:30.248 } 00:18:30.248 } 00:18:30.248 } 00:18:30.248 ]' 00:18:30.248 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:30.507 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:30.507 06:05:21 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:30.507 06:05:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:30.507 06:05:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:30.507 06:05:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:30.507 06:05:22 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:30.507 06:05:22 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:30.766 06:05:22 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:30.766 06:05:22 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size b51c9127-469e-4c8b-abe2-dc2a8d9380fe 00:18:30.766 06:05:22 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=b51c9127-469e-4c8b-abe2-dc2a8d9380fe 00:18:30.766 06:05:22 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:30.766 06:05:22 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:30.766 06:05:22 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:30.766 06:05:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b51c9127-469e-4c8b-abe2-dc2a8d9380fe 00:18:31.025 06:05:22 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:31.025 { 00:18:31.025 "name": "b51c9127-469e-4c8b-abe2-dc2a8d9380fe", 00:18:31.025 "aliases": [ 00:18:31.025 "lvs/nvme0n1p0" 00:18:31.025 ], 00:18:31.025 "product_name": "Logical Volume", 00:18:31.025 "block_size": 4096, 00:18:31.025 "num_blocks": 26476544, 00:18:31.025 "uuid": "b51c9127-469e-4c8b-abe2-dc2a8d9380fe", 00:18:31.025 "assigned_rate_limits": { 00:18:31.025 "rw_ios_per_sec": 0, 00:18:31.025 "rw_mbytes_per_sec": 0, 00:18:31.025 "r_mbytes_per_sec": 0, 00:18:31.025 "w_mbytes_per_sec": 0 00:18:31.025 }, 00:18:31.025 "claimed": false, 00:18:31.025 "zoned": false, 00:18:31.025 "supported_io_types": { 00:18:31.025 "read": true, 00:18:31.025 "write": true, 00:18:31.025 "unmap": true, 00:18:31.025 "flush": false, 00:18:31.025 "reset": true, 00:18:31.025 "nvme_admin": false, 00:18:31.025 "nvme_io": false, 00:18:31.025 "nvme_io_md": false, 00:18:31.025 "write_zeroes": true, 00:18:31.025 "zcopy": false, 00:18:31.025 "get_zone_info": false, 00:18:31.025 "zone_management": false, 00:18:31.025 "zone_append": false, 00:18:31.025 "compare": false, 00:18:31.025 "compare_and_write": false, 00:18:31.025 "abort": false, 00:18:31.025 "seek_hole": true, 00:18:31.025 "seek_data": true, 00:18:31.025 "copy": false, 00:18:31.025 "nvme_iov_md": false 00:18:31.025 }, 00:18:31.025 "driver_specific": { 00:18:31.025 "lvol": { 00:18:31.025 "lvol_store_uuid": "c6d74a56-0b76-4eb7-8058-6a682783f761", 00:18:31.025 "base_bdev": 
"nvme0n1", 00:18:31.025 "thin_provision": true, 00:18:31.025 "num_allocated_clusters": 0, 00:18:31.025 "snapshot": false, 00:18:31.025 "clone": false, 00:18:31.025 "esnap_clone": false 00:18:31.025 } 00:18:31.025 } 00:18:31.025 } 00:18:31.025 ]' 00:18:31.025 06:05:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:31.025 06:05:22 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:31.025 06:05:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:31.025 06:05:22 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:31.025 06:05:22 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:31.025 06:05:22 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:31.025 06:05:22 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:31.025 06:05:22 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b51c9127-469e-4c8b-abe2-dc2a8d9380fe --l2p_dram_limit 10' 00:18:31.025 06:05:22 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:31.025 06:05:22 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:31.025 06:05:22 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:31.025 06:05:22 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:31.025 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:31.025 06:05:22 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b51c9127-469e-4c8b-abe2-dc2a8d9380fe --l2p_dram_limit 10 -c nvc0n1p0 00:18:31.284 [2024-07-13 06:05:22.923821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.284 [2024-07-13 06:05:22.923892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:31.284 [2024-07-13 06:05:22.923931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:31.285 [2024-07-13 06:05:22.923945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.285 [2024-07-13 06:05:22.924035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.285 [2024-07-13 06:05:22.924059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:31.285 [2024-07-13 06:05:22.924077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:31.285 [2024-07-13 06:05:22.924089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.285 [2024-07-13 06:05:22.924146] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:31.285 [2024-07-13 06:05:22.924509] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:31.285 [2024-07-13 06:05:22.924553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.285 [2024-07-13 06:05:22.924567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:31.285 [2024-07-13 06:05:22.924583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:18:31.285 [2024-07-13 06:05:22.924596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.285 [2024-07-13 06:05:22.924834] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b0bf905e-23fc-4217-a19b-6c8f1c14c1b0 00:18:31.285 [2024-07-13 
06:05:22.925874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.285 [2024-07-13 06:05:22.925935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:31.285 [2024-07-13 06:05:22.925970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:31.285 [2024-07-13 06:05:22.925985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.285 [2024-07-13 06:05:22.930744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.285 [2024-07-13 06:05:22.930825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:31.285 [2024-07-13 06:05:22.930842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.669 ms 00:18:31.285 [2024-07-13 06:05:22.930856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.285 [2024-07-13 06:05:22.930966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.285 [2024-07-13 06:05:22.930992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:31.285 [2024-07-13 06:05:22.931005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:31.285 [2024-07-13 06:05:22.931019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.285 [2024-07-13 06:05:22.931123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.285 [2024-07-13 06:05:22.931149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:31.285 [2024-07-13 06:05:22.931163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:31.285 [2024-07-13 06:05:22.931193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.285 [2024-07-13 06:05:22.931240] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:31.285 [2024-07-13 06:05:22.932798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.285 [2024-07-13 06:05:22.932855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:31.285 [2024-07-13 06:05:22.932898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.570 ms 00:18:31.285 [2024-07-13 06:05:22.932917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.285 [2024-07-13 06:05:22.932970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.285 [2024-07-13 06:05:22.932986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:31.285 [2024-07-13 06:05:22.933000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:31.285 [2024-07-13 06:05:22.933011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.285 [2024-07-13 06:05:22.933058] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:31.285 [2024-07-13 06:05:22.933298] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:31.285 [2024-07-13 06:05:22.933326] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:31.285 [2024-07-13 06:05:22.933342] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:31.285 [2024-07-13 06:05:22.933359] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:18:31.285 [2024-07-13 06:05:22.933373] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:31.285 [2024-07-13 06:05:22.933388] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:31.285 [2024-07-13 06:05:22.933402] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:31.285 [2024-07-13 06:05:22.933415] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:31.285 [2024-07-13 06:05:22.933426] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:31.285 [2024-07-13 06:05:22.933440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.285 [2024-07-13 06:05:22.933452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:31.285 [2024-07-13 06:05:22.933467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:18:31.285 [2024-07-13 06:05:22.933479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.285 [2024-07-13 06:05:22.933577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.285 [2024-07-13 06:05:22.933592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:31.285 [2024-07-13 06:05:22.933610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:31.285 [2024-07-13 06:05:22.933621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.285 [2024-07-13 06:05:22.933752] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:31.285 [2024-07-13 06:05:22.933781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:31.285 [2024-07-13 06:05:22.933798] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:31.285 [2024-07-13 06:05:22.933820] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.285 [2024-07-13 06:05:22.933843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:31.285 [2024-07-13 06:05:22.933854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:31.285 [2024-07-13 06:05:22.933868] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:31.285 [2024-07-13 06:05:22.933879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:31.285 [2024-07-13 06:05:22.933893] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:31.285 [2024-07-13 06:05:22.933904] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:31.285 [2024-07-13 06:05:22.933916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:31.285 [2024-07-13 06:05:22.933927] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:31.285 [2024-07-13 06:05:22.933939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:31.285 [2024-07-13 06:05:22.933950] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:31.285 [2024-07-13 06:05:22.933966] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:31.285 [2024-07-13 06:05:22.933977] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.285 [2024-07-13 06:05:22.933989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:31.285 [2024-07-13 06:05:22.934000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
00:18:31.285 [2024-07-13 06:05:22.934014] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.285 [2024-07-13 06:05:22.934025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:31.285 [2024-07-13 06:05:22.934038] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:31.285 [2024-07-13 06:05:22.934049] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:31.285 [2024-07-13 06:05:22.934062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:31.285 [2024-07-13 06:05:22.934073] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:31.285 [2024-07-13 06:05:22.934088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:31.285 [2024-07-13 06:05:22.934099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:31.285 [2024-07-13 06:05:22.934112] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:31.285 [2024-07-13 06:05:22.934122] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:31.285 [2024-07-13 06:05:22.934161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:31.285 [2024-07-13 06:05:22.934176] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:31.285 [2024-07-13 06:05:22.934192] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:31.285 [2024-07-13 06:05:22.934203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:31.285 [2024-07-13 06:05:22.934215] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:31.285 [2024-07-13 06:05:22.934226] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:31.285 [2024-07-13 06:05:22.934238] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:31.285 [2024-07-13 06:05:22.934249] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:31.285 [2024-07-13 06:05:22.934262] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:31.285 [2024-07-13 06:05:22.934273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:31.285 [2024-07-13 06:05:22.934285] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:31.285 [2024-07-13 06:05:22.934296] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.285 [2024-07-13 06:05:22.934308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:31.285 [2024-07-13 06:05:22.934319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:31.285 [2024-07-13 06:05:22.934332] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.285 [2024-07-13 06:05:22.934342] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:31.285 [2024-07-13 06:05:22.934356] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:31.285 [2024-07-13 06:05:22.934372] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:31.285 [2024-07-13 06:05:22.934387] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.285 [2024-07-13 06:05:22.934399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:31.285 [2024-07-13 06:05:22.934412] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:31.285 [2024-07-13 06:05:22.934423] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:31.285 [2024-07-13 06:05:22.934436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:31.285 [2024-07-13 06:05:22.934446] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:31.285 [2024-07-13 06:05:22.934459] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:31.285 [2024-07-13 06:05:22.934486] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:31.285 [2024-07-13 06:05:22.934506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:31.285 [2024-07-13 06:05:22.934519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:31.286 [2024-07-13 06:05:22.934534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:31.286 [2024-07-13 06:05:22.934546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:31.286 [2024-07-13 06:05:22.934559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:31.286 [2024-07-13 06:05:22.934570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:31.286 [2024-07-13 06:05:22.934584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:31.286 [2024-07-13 06:05:22.934595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:31.286 [2024-07-13 06:05:22.934611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:31.286 [2024-07-13 06:05:22.934622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:31.286 [2024-07-13 06:05:22.934637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:31.286 [2024-07-13 06:05:22.934649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:31.286 [2024-07-13 06:05:22.934662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:31.286 [2024-07-13 06:05:22.934674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:31.286 [2024-07-13 06:05:22.934688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:31.286 [2024-07-13 06:05:22.934699] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:31.286 [2024-07-13 06:05:22.934714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:31.286 [2024-07-13 06:05:22.934727] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:31.286 [2024-07-13 06:05:22.934740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:31.286 [2024-07-13 06:05:22.934751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:31.286 [2024-07-13 06:05:22.934765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:31.286 [2024-07-13 06:05:22.934777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.286 [2024-07-13 06:05:22.934791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:31.286 [2024-07-13 06:05:22.934803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:18:31.286 [2024-07-13 06:05:22.934819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.286 [2024-07-13 06:05:22.934874] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:31.286 [2024-07-13 06:05:22.934894] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:33.189 [2024-07-13 06:05:24.850844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.189 [2024-07-13 06:05:24.850940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:33.189 [2024-07-13 06:05:24.850989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1915.981 ms 00:18:33.189 [2024-07-13 06:05:24.851006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.189 [2024-07-13 06:05:24.858771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.189 [2024-07-13 06:05:24.858844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:33.189 [2024-07-13 06:05:24.858879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.449 ms 00:18:33.189 [2024-07-13 06:05:24.858893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.189 [2024-07-13 06:05:24.859005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.189 [2024-07-13 06:05:24.859026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:33.189 [2024-07-13 06:05:24.859055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:33.189 [2024-07-13 06:05:24.859086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.189 [2024-07-13 06:05:24.867523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.189 [2024-07-13 06:05:24.867591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:33.189 [2024-07-13 06:05:24.867626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.325 ms 00:18:33.189 [2024-07-13 06:05:24.867640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.189 [2024-07-13 06:05:24.867685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.189 [2024-07-13 06:05:24.867704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:33.189 [2024-07-13 06:05:24.867734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 
ms 00:18:33.189 [2024-07-13 06:05:24.867760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.189 [2024-07-13 06:05:24.868136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.189 [2024-07-13 06:05:24.868185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:33.189 [2024-07-13 06:05:24.868201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:18:33.189 [2024-07-13 06:05:24.868216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.189 [2024-07-13 06:05:24.868369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.189 [2024-07-13 06:05:24.868392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:33.189 [2024-07-13 06:05:24.868406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:18:33.189 [2024-07-13 06:05:24.868419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.189 [2024-07-13 06:05:24.874286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.189 [2024-07-13 06:05:24.874365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:33.189 [2024-07-13 06:05:24.874394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.831 ms 00:18:33.189 [2024-07-13 06:05:24.874410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.189 [2024-07-13 06:05:24.883149] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:33.189 [2024-07-13 06:05:24.885980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.189 [2024-07-13 06:05:24.886029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:33.189 [2024-07-13 06:05:24.886065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.468 ms 00:18:33.189 [2024-07-13 06:05:24.886077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.471 [2024-07-13 06:05:24.938075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.471 [2024-07-13 06:05:24.938196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:33.471 [2024-07-13 06:05:24.938243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.956 ms 00:18:33.471 [2024-07-13 06:05:24.938264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.471 [2024-07-13 06:05:24.938529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.471 [2024-07-13 06:05:24.938552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:33.471 [2024-07-13 06:05:24.938568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:18:33.471 [2024-07-13 06:05:24.938581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.471 [2024-07-13 06:05:24.942461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.471 [2024-07-13 06:05:24.942535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:33.471 [2024-07-13 06:05:24.942571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.846 ms 00:18:33.471 [2024-07-13 06:05:24.942598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.471 [2024-07-13 06:05:24.945804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.471 [2024-07-13 06:05:24.945859] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:33.471 [2024-07-13 06:05:24.945895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.154 ms 00:18:33.471 [2024-07-13 06:05:24.945906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.471 [2024-07-13 06:05:24.946325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.471 [2024-07-13 06:05:24.946366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:33.471 [2024-07-13 06:05:24.946385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:18:33.471 [2024-07-13 06:05:24.946398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.471 [2024-07-13 06:05:24.978685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.471 [2024-07-13 06:05:24.978767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:33.471 [2024-07-13 06:05:24.978808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.216 ms 00:18:33.471 [2024-07-13 06:05:24.978824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.472 [2024-07-13 06:05:24.983579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.472 [2024-07-13 06:05:24.983623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:33.472 [2024-07-13 06:05:24.983644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.682 ms 00:18:33.472 [2024-07-13 06:05:24.983657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.472 [2024-07-13 06:05:24.987397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.472 [2024-07-13 06:05:24.987444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:33.472 [2024-07-13 06:05:24.987465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.685 ms 00:18:33.472 [2024-07-13 06:05:24.987477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.472 [2024-07-13 06:05:24.991483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.472 [2024-07-13 06:05:24.991524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:33.472 [2024-07-13 06:05:24.991545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.953 ms 00:18:33.472 [2024-07-13 06:05:24.991557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.472 [2024-07-13 06:05:24.991619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.472 [2024-07-13 06:05:24.991638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:33.472 [2024-07-13 06:05:24.991653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:33.472 [2024-07-13 06:05:24.991666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.472 [2024-07-13 06:05:24.991746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.472 [2024-07-13 06:05:24.991763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:33.472 [2024-07-13 06:05:24.991780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:33.472 [2024-07-13 06:05:24.991794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.472 [2024-07-13 06:05:24.992910] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: 
[FTL][ftl0] Management process finished, name 'FTL startup', duration = 2068.650 ms, result 0 00:18:33.472 { 00:18:33.472 "name": "ftl0", 00:18:33.472 "uuid": "b0bf905e-23fc-4217-a19b-6c8f1c14c1b0" 00:18:33.472 } 00:18:33.472 06:05:25 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:33.472 06:05:25 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:33.730 06:05:25 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:33.730 06:05:25 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:33.991 [2024-07-13 06:05:25.539360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.991 [2024-07-13 06:05:25.539453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:33.991 [2024-07-13 06:05:25.539476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:33.991 [2024-07-13 06:05:25.539499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.991 [2024-07-13 06:05:25.539542] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:33.991 [2024-07-13 06:05:25.539975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.991 [2024-07-13 06:05:25.540016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:33.991 [2024-07-13 06:05:25.540040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:18:33.991 [2024-07-13 06:05:25.540053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.991 [2024-07-13 06:05:25.540385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.991 [2024-07-13 06:05:25.540411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:33.991 [2024-07-13 06:05:25.540427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:18:33.991 [2024-07-13 06:05:25.540442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.991 [2024-07-13 06:05:25.543722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.991 [2024-07-13 06:05:25.543767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:33.991 [2024-07-13 06:05:25.543800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.254 ms 00:18:33.991 [2024-07-13 06:05:25.543811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.991 [2024-07-13 06:05:25.550313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.991 [2024-07-13 06:05:25.550374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:33.991 [2024-07-13 06:05:25.550406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.475 ms 00:18:33.991 [2024-07-13 06:05:25.550435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.991 [2024-07-13 06:05:25.551877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.991 [2024-07-13 06:05:25.551931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:33.991 [2024-07-13 06:05:25.551968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.327 ms 00:18:33.991 [2024-07-13 06:05:25.551979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.991 [2024-07-13 06:05:25.556129] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.991 [2024-07-13 06:05:25.556214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:33.991 [2024-07-13 06:05:25.556255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.103 ms 00:18:33.991 [2024-07-13 06:05:25.556267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.991 [2024-07-13 06:05:25.556406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.991 [2024-07-13 06:05:25.556425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:33.991 [2024-07-13 06:05:25.556476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:18:33.991 [2024-07-13 06:05:25.556488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.991 [2024-07-13 06:05:25.558276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.991 [2024-07-13 06:05:25.558327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:33.991 [2024-07-13 06:05:25.558360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.760 ms 00:18:33.991 [2024-07-13 06:05:25.558371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.992 [2024-07-13 06:05:25.559883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.992 [2024-07-13 06:05:25.559921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:33.992 [2024-07-13 06:05:25.559956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.467 ms 00:18:33.992 [2024-07-13 06:05:25.559967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.992 [2024-07-13 06:05:25.561266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.992 [2024-07-13 06:05:25.561306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:33.992 [2024-07-13 06:05:25.561325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.253 ms 00:18:33.992 [2024-07-13 06:05:25.561337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.992 [2024-07-13 06:05:25.562570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.992 [2024-07-13 06:05:25.562608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:33.992 [2024-07-13 06:05:25.562642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.153 ms 00:18:33.992 [2024-07-13 06:05:25.562653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.992 [2024-07-13 06:05:25.562699] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:33.992 [2024-07-13 06:05:25.562723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.562752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.562764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.562793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.562805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.562821] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.562833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.562846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.562858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.562871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.562883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.562896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.562908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.562921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.562933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.562952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.562963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.562976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.562988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 
06:05:25.563142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:18:33.992 [2024-07-13 06:05:25.563479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:33.992 [2024-07-13 06:05:25.563839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:33.993 [2024-07-13 06:05:25.563855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:33.993 [2024-07-13 06:05:25.563866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:33.993 [2024-07-13 06:05:25.563879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:33.993 [2024-07-13 06:05:25.563891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:33.993 [2024-07-13 06:05:25.563904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:33.993 [2024-07-13 06:05:25.563915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:33.993 [2024-07-13 06:05:25.563928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:33.993 [2024-07-13 06:05:25.563939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:33.993 [2024-07-13 06:05:25.563952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:33.993 [2024-07-13 06:05:25.563964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:33.993 [2024-07-13 06:05:25.563977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:33.993 [2024-07-13 06:05:25.563989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:33.993 [2024-07-13 06:05:25.564004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:33.993 [2024-07-13 06:05:25.564016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:33.993 [2024-07-13 06:05:25.564029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:33.993 [2024-07-13 06:05:25.564049] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:33.993 [2024-07-13 06:05:25.564064] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b0bf905e-23fc-4217-a19b-6c8f1c14c1b0 00:18:33.993 [2024-07-13 06:05:25.564076] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:33.993 [2024-07-13 06:05:25.564090] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:33.993 [2024-07-13 06:05:25.564124] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:33.993 [2024-07-13 06:05:25.564154] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:33.993 [2024-07-13 06:05:25.564178] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:33.993 [2024-07-13 06:05:25.564195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:33.993 [2024-07-13 06:05:25.564206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:33.993 [2024-07-13 06:05:25.564218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:33.993 [2024-07-13 06:05:25.564228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:33.993 [2024-07-13 06:05:25.564241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.993 [2024-07-13 06:05:25.564260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:33.993 [2024-07-13 06:05:25.564275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.548 ms 00:18:33.993 [2024-07-13 06:05:25.564287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.993 [2024-07-13 06:05:25.565807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.993 [2024-07-13 06:05:25.565854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:33.993 [2024-07-13 06:05:25.565873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.473 ms 00:18:33.993 [2024-07-13 06:05:25.565886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.993 [2024-07-13 06:05:25.565967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.993 [2024-07-13 06:05:25.565982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:33.993 [2024-07-13 06:05:25.565996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:33.993 [2024-07-13 06:05:25.566007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.993 [2024-07-13 06:05:25.571227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.993 [2024-07-13 06:05:25.571283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:33.993 [2024-07-13 06:05:25.571318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.993 [2024-07-13 06:05:25.571332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.993 [2024-07-13 06:05:25.571396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.993 [2024-07-13 06:05:25.571411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:33.993 [2024-07-13 06:05:25.571436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.993 [2024-07-13 06:05:25.571448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.993 [2024-07-13 06:05:25.571585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.993 [2024-07-13 06:05:25.571606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:33.993 [2024-07-13 06:05:25.571624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.993 [2024-07-13 06:05:25.571636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.993 [2024-07-13 06:05:25.571667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.993 [2024-07-13 06:05:25.571681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:18:33.993 [2024-07-13 06:05:25.571695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.993 [2024-07-13 06:05:25.571706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.993 [2024-07-13 06:05:25.580033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.993 [2024-07-13 06:05:25.580103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:33.993 [2024-07-13 06:05:25.580141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.993 [2024-07-13 06:05:25.580168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.993 [2024-07-13 06:05:25.586960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.993 [2024-07-13 06:05:25.587023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:33.993 [2024-07-13 06:05:25.587058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.993 [2024-07-13 06:05:25.587070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.993 [2024-07-13 06:05:25.587158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.993 [2024-07-13 06:05:25.587191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:33.993 [2024-07-13 06:05:25.587208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.993 [2024-07-13 06:05:25.587220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.993 [2024-07-13 06:05:25.587316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.993 [2024-07-13 06:05:25.587359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:33.993 [2024-07-13 06:05:25.587374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.993 [2024-07-13 06:05:25.587386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.993 [2024-07-13 06:05:25.587486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.993 [2024-07-13 06:05:25.587522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:33.993 [2024-07-13 06:05:25.587538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.993 [2024-07-13 06:05:25.587549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.993 [2024-07-13 06:05:25.587608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.993 [2024-07-13 06:05:25.587628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:33.993 [2024-07-13 06:05:25.587643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.993 [2024-07-13 06:05:25.587657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.993 [2024-07-13 06:05:25.587721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.993 [2024-07-13 06:05:25.587738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:33.993 [2024-07-13 06:05:25.587755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.993 [2024-07-13 06:05:25.587767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.993 [2024-07-13 06:05:25.587842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.993 [2024-07-13 06:05:25.587858] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:33.993 [2024-07-13 06:05:25.587872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.993 [2024-07-13 06:05:25.587884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.993 [2024-07-13 06:05:25.588054] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.648 ms, result 0 00:18:33.993 true 00:18:33.993 06:05:25 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 91216 00:18:33.993 06:05:25 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 91216 ']' 00:18:33.993 06:05:25 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 91216 00:18:33.993 06:05:25 ftl.ftl_restore -- common/autotest_common.sh@953 -- # uname 00:18:33.993 06:05:25 ftl.ftl_restore -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:33.993 06:05:25 ftl.ftl_restore -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91216 00:18:33.993 06:05:25 ftl.ftl_restore -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:33.993 killing process with pid 91216 00:18:33.993 06:05:25 ftl.ftl_restore -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:33.993 06:05:25 ftl.ftl_restore -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91216' 00:18:33.993 06:05:25 ftl.ftl_restore -- common/autotest_common.sh@967 -- # kill 91216 00:18:33.993 06:05:25 ftl.ftl_restore -- common/autotest_common.sh@972 -- # wait 91216 00:18:37.280 06:05:28 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:41.469 262144+0 records in 00:18:41.469 262144+0 records out 00:18:41.469 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.4787 s, 240 MB/s 00:18:41.469 06:05:32 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:43.365 06:05:34 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:43.365 [2024-07-13 06:05:34.903560] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
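For reference, the restore flow traced above (restore.sh steps 61-73) wraps the saved bdev subsystem config in a top-level {"subsystems": [...]} document, fills a 1 GiB test file from /dev/urandom, checksums it, and replays it onto the FTL bdev through spdk_dd. A minimal bash sketch of that pattern follows, assuming an SPDK checkout at $SPDK_DIR, an already-configured FTL bdev named ftl0 reachable over RPC, and that the three config lines are concatenated into the ftl.json later handed to spdk_dd via --json (the log shows the commands but not the redirection target itself):

    #!/usr/bin/env bash
    set -euo pipefail
    SPDK_DIR=/home/vagrant/spdk_repo/spdk      # checkout path as seen in the log
    CFG=$SPDK_DIR/test/ftl/config/ftl.json     # config consumed by spdk_dd --json (assumed target)
    TESTFILE=$SPDK_DIR/test/ftl/testfile

    # Build a top-level "subsystems" array around the bdev config saved from
    # the still-running test app over RPC (steps 61-63 above).
    {
        echo '{"subsystems": ['
        "$SPDK_DIR/scripts/rpc.py" save_subsystem_config -n bdev
        echo ']}'
    } > "$CFG"

    # (In the log, ftl0 is then unloaded and the test app killed; omitted here.)

    # 1 GiB of random data: 256Ki blocks x 4 KiB = 1073741824 bytes,
    # matching the "262144+0 records in/out" dd output above.
    dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K

    # Checksum now; the restore test compares against this after FTL reloads.
    md5sum "$TESTFILE"

    # Replay the file onto the FTL bdev; spdk_dd recreates ftl0 from $CFG.
    "$SPDK_DIR/build/bin/spdk_dd" --if="$TESTFILE" --ob=ftl0 --json="$CFG"

This is a sketch of the test's data path, not the literal restore.sh: the point is that spdk_dd needs no long-lived app, only the JSON config, which is why the startup banner that follows belongs to a fresh SPDK instance.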
00:18:43.365 [2024-07-13 06:05:34.903749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91419 ] 00:18:43.365 [2024-07-13 06:05:35.047899] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.365 [2024-07-13 06:05:35.089712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.622 [2024-07-13 06:05:35.178739] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:43.622 [2024-07-13 06:05:35.178847] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:43.622 [2024-07-13 06:05:35.337975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.622 [2024-07-13 06:05:35.338056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:43.622 [2024-07-13 06:05:35.338078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:43.622 [2024-07-13 06:05:35.338091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.622 [2024-07-13 06:05:35.338199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.622 [2024-07-13 06:05:35.338220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:43.622 [2024-07-13 06:05:35.338240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:43.622 [2024-07-13 06:05:35.338251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.622 [2024-07-13 06:05:35.338285] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:43.622 [2024-07-13 06:05:35.338618] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:43.622 [2024-07-13 06:05:35.338647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.622 [2024-07-13 06:05:35.338660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:43.623 [2024-07-13 06:05:35.338683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:18:43.623 [2024-07-13 06:05:35.338698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.623 [2024-07-13 06:05:35.339952] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:43.623 [2024-07-13 06:05:35.342165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.623 [2024-07-13 06:05:35.342206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:43.623 [2024-07-13 06:05:35.342231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.215 ms 00:18:43.623 [2024-07-13 06:05:35.342253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.623 [2024-07-13 06:05:35.342327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.623 [2024-07-13 06:05:35.342348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:43.623 [2024-07-13 06:05:35.342361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:43.623 [2024-07-13 06:05:35.342372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.623 [2024-07-13 06:05:35.346832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:43.623 [2024-07-13 06:05:35.346877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:43.623 [2024-07-13 06:05:35.346893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.364 ms 00:18:43.623 [2024-07-13 06:05:35.346905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.623 [2024-07-13 06:05:35.347030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.623 [2024-07-13 06:05:35.347065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:43.623 [2024-07-13 06:05:35.347081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:18:43.623 [2024-07-13 06:05:35.347093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.623 [2024-07-13 06:05:35.347235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.623 [2024-07-13 06:05:35.347263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:43.623 [2024-07-13 06:05:35.347284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:43.623 [2024-07-13 06:05:35.347297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.623 [2024-07-13 06:05:35.347336] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:43.623 [2024-07-13 06:05:35.348687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.623 [2024-07-13 06:05:35.348718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:43.623 [2024-07-13 06:05:35.348732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms 00:18:43.623 [2024-07-13 06:05:35.348744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.623 [2024-07-13 06:05:35.348803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.623 [2024-07-13 06:05:35.348824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:43.623 [2024-07-13 06:05:35.348837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:43.623 [2024-07-13 06:05:35.348857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.623 [2024-07-13 06:05:35.348884] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:43.883 [2024-07-13 06:05:35.348913] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:43.883 [2024-07-13 06:05:35.348963] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:43.883 [2024-07-13 06:05:35.348993] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:43.883 [2024-07-13 06:05:35.349107] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:43.883 [2024-07-13 06:05:35.349152] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:43.883 [2024-07-13 06:05:35.349179] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:43.883 [2024-07-13 06:05:35.349197] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:43.883 [2024-07-13 06:05:35.349211] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:43.883 [2024-07-13 06:05:35.349230] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:43.883 [2024-07-13 06:05:35.349241] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:43.883 [2024-07-13 06:05:35.349252] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:43.883 [2024-07-13 06:05:35.349263] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:43.883 [2024-07-13 06:05:35.349275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.883 [2024-07-13 06:05:35.349287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:43.883 [2024-07-13 06:05:35.349305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:18:43.883 [2024-07-13 06:05:35.349316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.883 [2024-07-13 06:05:35.349405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.883 [2024-07-13 06:05:35.349435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:43.883 [2024-07-13 06:05:35.349453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:43.883 [2024-07-13 06:05:35.349465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.883 [2024-07-13 06:05:35.349580] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:43.883 [2024-07-13 06:05:35.349600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:43.883 [2024-07-13 06:05:35.349613] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:43.883 [2024-07-13 06:05:35.349630] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.883 [2024-07-13 06:05:35.349642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:43.883 [2024-07-13 06:05:35.349652] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:43.883 [2024-07-13 06:05:35.349663] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:43.883 [2024-07-13 06:05:35.349674] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:43.883 [2024-07-13 06:05:35.349684] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:43.883 [2024-07-13 06:05:35.349695] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:43.883 [2024-07-13 06:05:35.349705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:43.883 [2024-07-13 06:05:35.349715] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:43.883 [2024-07-13 06:05:35.349725] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:43.883 [2024-07-13 06:05:35.349747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:43.883 [2024-07-13 06:05:35.349760] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:43.883 [2024-07-13 06:05:35.349772] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.883 [2024-07-13 06:05:35.349782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:43.883 [2024-07-13 06:05:35.349793] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:43.883 [2024-07-13 06:05:35.349804] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.883 [2024-07-13 06:05:35.349816] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:43.883 [2024-07-13 06:05:35.349826] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:43.883 [2024-07-13 06:05:35.349837] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.883 [2024-07-13 06:05:35.349847] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:43.883 [2024-07-13 06:05:35.349857] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:43.883 [2024-07-13 06:05:35.349868] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.883 [2024-07-13 06:05:35.349878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:43.883 [2024-07-13 06:05:35.349888] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:43.883 [2024-07-13 06:05:35.349898] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.883 [2024-07-13 06:05:35.349908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:43.883 [2024-07-13 06:05:35.349919] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:43.883 [2024-07-13 06:05:35.349934] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:43.883 [2024-07-13 06:05:35.349945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:43.883 [2024-07-13 06:05:35.349955] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:43.883 [2024-07-13 06:05:35.349965] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:43.883 [2024-07-13 06:05:35.349976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:43.883 [2024-07-13 06:05:35.349986] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:43.883 [2024-07-13 06:05:35.349997] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:43.883 [2024-07-13 06:05:35.350007] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:43.883 [2024-07-13 06:05:35.350018] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:43.883 [2024-07-13 06:05:35.350028] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.883 [2024-07-13 06:05:35.350038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:43.883 [2024-07-13 06:05:35.350049] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:43.883 [2024-07-13 06:05:35.350059] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.883 [2024-07-13 06:05:35.350069] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:43.883 [2024-07-13 06:05:35.350081] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:43.883 [2024-07-13 06:05:35.350091] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:43.883 [2024-07-13 06:05:35.350105] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:43.883 [2024-07-13 06:05:35.350125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:43.883 [2024-07-13 06:05:35.350152] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:43.883 [2024-07-13 06:05:35.350165] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:43.883 
[2024-07-13 06:05:35.350176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:43.883 [2024-07-13 06:05:35.350187] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:43.883 [2024-07-13 06:05:35.350197] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:43.883 [2024-07-13 06:05:35.350209] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:43.883 [2024-07-13 06:05:35.350224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:43.883 [2024-07-13 06:05:35.350238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:43.883 [2024-07-13 06:05:35.350249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:43.883 [2024-07-13 06:05:35.350261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:43.883 [2024-07-13 06:05:35.350272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:43.884 [2024-07-13 06:05:35.350283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:43.884 [2024-07-13 06:05:35.350295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:43.884 [2024-07-13 06:05:35.350306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:43.884 [2024-07-13 06:05:35.350321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:43.884 [2024-07-13 06:05:35.350333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:43.884 [2024-07-13 06:05:35.350345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:43.884 [2024-07-13 06:05:35.350356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:43.884 [2024-07-13 06:05:35.350367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:43.884 [2024-07-13 06:05:35.350378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:43.884 [2024-07-13 06:05:35.350390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:43.884 [2024-07-13 06:05:35.350401] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:43.884 [2024-07-13 06:05:35.350414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:43.884 [2024-07-13 06:05:35.350442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:43.884 [2024-07-13 06:05:35.350454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:43.884 [2024-07-13 06:05:35.350465] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:43.884 [2024-07-13 06:05:35.350477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:43.884 [2024-07-13 06:05:35.350490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.884 [2024-07-13 06:05:35.350501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:43.884 [2024-07-13 06:05:35.350517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:18:43.884 [2024-07-13 06:05:35.350531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.884 [2024-07-13 06:05:35.368134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.884 [2024-07-13 06:05:35.368226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:43.884 [2024-07-13 06:05:35.368248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.542 ms 00:18:43.884 [2024-07-13 06:05:35.368260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.884 [2024-07-13 06:05:35.368383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.884 [2024-07-13 06:05:35.368405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:43.884 [2024-07-13 06:05:35.368420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:43.884 [2024-07-13 06:05:35.368435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.884 [2024-07-13 06:05:35.376563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.884 [2024-07-13 06:05:35.376627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:43.884 [2024-07-13 06:05:35.376645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.025 ms 00:18:43.884 [2024-07-13 06:05:35.376656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.884 [2024-07-13 06:05:35.376729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.884 [2024-07-13 06:05:35.376746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:43.884 [2024-07-13 06:05:35.376766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:43.884 [2024-07-13 06:05:35.376777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.884 [2024-07-13 06:05:35.377123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.884 [2024-07-13 06:05:35.377142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:43.884 [2024-07-13 06:05:35.377154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:18:43.884 [2024-07-13 06:05:35.377209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.884 [2024-07-13 06:05:35.377368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.884 [2024-07-13 06:05:35.377387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:43.884 [2024-07-13 06:05:35.377400] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:18:43.884 [2024-07-13 06:05:35.377415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.884 [2024-07-13 06:05:35.382326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.884 [2024-07-13 06:05:35.382363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:43.884 [2024-07-13 06:05:35.382379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.884 ms 00:18:43.884 [2024-07-13 06:05:35.382391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.884 [2024-07-13 06:05:35.384681] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:43.884 [2024-07-13 06:05:35.384722] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:43.884 [2024-07-13 06:05:35.384746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.884 [2024-07-13 06:05:35.384758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:43.884 [2024-07-13 06:05:35.384770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.201 ms 00:18:43.884 [2024-07-13 06:05:35.384784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.884 [2024-07-13 06:05:35.400880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.884 [2024-07-13 06:05:35.400945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:43.884 [2024-07-13 06:05:35.400972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.049 ms 00:18:43.884 [2024-07-13 06:05:35.400984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.884 [2024-07-13 06:05:35.402958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.884 [2024-07-13 06:05:35.402995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:43.884 [2024-07-13 06:05:35.403011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.902 ms 00:18:43.884 [2024-07-13 06:05:35.403022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.884 [2024-07-13 06:05:35.404636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.884 [2024-07-13 06:05:35.404685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:43.884 [2024-07-13 06:05:35.404700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.572 ms 00:18:43.884 [2024-07-13 06:05:35.404711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.884 [2024-07-13 06:05:35.405088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.884 [2024-07-13 06:05:35.405115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:43.884 [2024-07-13 06:05:35.405142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:18:43.884 [2024-07-13 06:05:35.405156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.884 [2024-07-13 06:05:35.421993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.884 [2024-07-13 06:05:35.422097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:43.884 [2024-07-13 06:05:35.422117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.795 ms 00:18:43.884 [2024-07-13 06:05:35.422139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.884 [2024-07-13 06:05:35.430578] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:43.884 [2024-07-13 06:05:35.433179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.884 [2024-07-13 06:05:35.433229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:43.884 [2024-07-13 06:05:35.433246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.931 ms 00:18:43.884 [2024-07-13 06:05:35.433258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.884 [2024-07-13 06:05:35.433345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.884 [2024-07-13 06:05:35.433364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:43.885 [2024-07-13 06:05:35.433378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:43.885 [2024-07-13 06:05:35.433389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.885 [2024-07-13 06:05:35.433499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.885 [2024-07-13 06:05:35.433526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:43.885 [2024-07-13 06:05:35.433544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:43.885 [2024-07-13 06:05:35.433555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.885 [2024-07-13 06:05:35.433589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.885 [2024-07-13 06:05:35.433604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:43.885 [2024-07-13 06:05:35.433615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:43.885 [2024-07-13 06:05:35.433626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.885 [2024-07-13 06:05:35.433678] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:43.885 [2024-07-13 06:05:35.433702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.885 [2024-07-13 06:05:35.433732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:43.885 [2024-07-13 06:05:35.433744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:43.885 [2024-07-13 06:05:35.433759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.885 [2024-07-13 06:05:35.437290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.885 [2024-07-13 06:05:35.437339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:43.885 [2024-07-13 06:05:35.437355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.506 ms 00:18:43.885 [2024-07-13 06:05:35.437367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.885 [2024-07-13 06:05:35.437445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.885 [2024-07-13 06:05:35.437464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:43.885 [2024-07-13 06:05:35.437476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:43.885 [2024-07-13 06:05:35.437501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.885 
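Every record above follows the same trace_step pattern: an "Action" marker, the step name, a measured duration, and a status code; later in this log the shutdown emits matching "Rollback" entries in the reverse of the startup order. A minimal sketch of that kind of timed step pipeline, assuming invented names throughout (this is illustrative, not the SPDK FTL mngt API):

    /* Hypothetical timed step pipeline with reverse-order rollback. */
    #include <stdio.h>
    #include <time.h>

    struct step {
        const char *name;
        int  (*action)(void);    /* returns 0 on success */
        void (*rollback)(void);  /* undoes a completed action; may be NULL */
    };

    static double elapsed_ms(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
    }

    /* Used both after a failed step and at shutdown: completed steps are
     * undone newest-first, which is why the shutdown trace in this log
     * prints its Rollback entries in the reverse of the startup order. */
    static void rollback_steps(const struct step *steps, int n_done)
    {
        for (int i = n_done - 1; i >= 0; i--)
            if (steps[i].rollback) {
                steps[i].rollback();
                printf("Rollback name: %s\n", steps[i].name);
            }
    }

    static int run_steps(const struct step *steps, int n)
    {
        struct timespec t0, t1;

        for (int i = 0; i < n; i++) {
            clock_gettime(CLOCK_MONOTONIC, &t0);
            int rc = steps[i].action();
            clock_gettime(CLOCK_MONOTONIC, &t1);
            printf("Action name: %s duration: %.3f ms status: %d\n",
                   steps[i].name, elapsed_ms(t0, t1), rc);
            if (rc != 0) {
                rollback_steps(steps, i);  /* undo what already completed */
                return rc;
            }
        }
        return 0;
    }

    static int step_ok(void) { return 0; }
    static void undo(void)   { }

    int main(void)
    {
        const struct step startup[] = {
            { "Open base bdev", step_ok, undo },
            { "Initialize L2P", step_ok, undo },
        };
        if (run_steps(startup, 2) == 0)
            rollback_steps(startup, 2);  /* shutdown path: unwind everything */
        return 0;
    }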
[2024-07-13 06:05:35.438681] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 100.238 ms, result 0 00:19:25.953  Copying: 23/1024 [MB] (23 MBps) [intermediate dd progress updates elided; throughput held at 21-25 MBps] Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-13 06:06:17.476827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.953 [2024-07-13 06:06:17.477007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:25.953 [2024-07-13 06:06:17.477066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:25.953 [2024-07-13 06:06:17.477079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.953 [2024-07-13 06:06:17.477118] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:25.953 [2024-07-13 06:06:17.477622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.953 [2024-07-13 06:06:17.477643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:25.953 [2024-07-13 06:06:17.477655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:19:25.954 [2024-07-13 06:06:17.477666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.954 [2024-07-13 06:06:17.479105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.954 [2024-07-13 06:06:17.479173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:25.954 [2024-07-13 06:06:17.479218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.416 ms 00:19:25.954 [2024-07-13 06:06:17.479237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.954 [2024-07-13 06:06:17.496230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.954 [2024-07-13 06:06:17.496294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:25.954 [2024-07-13 06:06:17.496313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.961 ms 00:19:25.954 [2024-07-13 06:06:17.496325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.954 
[2024-07-13 06:06:17.502802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.954 [2024-07-13 06:06:17.502853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:25.954 [2024-07-13 06:06:17.502882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.435 ms 00:19:25.954 [2024-07-13 06:06:17.502900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.954 [2024-07-13 06:06:17.504414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.954 [2024-07-13 06:06:17.504452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:25.954 [2024-07-13 06:06:17.504467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.450 ms 00:19:25.954 [2024-07-13 06:06:17.504477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.954 [2024-07-13 06:06:17.507395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.954 [2024-07-13 06:06:17.507495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:25.954 [2024-07-13 06:06:17.507511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.882 ms 00:19:25.954 [2024-07-13 06:06:17.507522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.954 [2024-07-13 06:06:17.507668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.954 [2024-07-13 06:06:17.507687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:25.954 [2024-07-13 06:06:17.507701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:19:25.954 [2024-07-13 06:06:17.507716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.954 [2024-07-13 06:06:17.509516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.954 [2024-07-13 06:06:17.509554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:25.954 [2024-07-13 06:06:17.509582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.779 ms 00:19:25.954 [2024-07-13 06:06:17.509593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.954 [2024-07-13 06:06:17.511273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.954 [2024-07-13 06:06:17.511337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:25.954 [2024-07-13 06:06:17.511350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.644 ms 00:19:25.954 [2024-07-13 06:06:17.511360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.954 [2024-07-13 06:06:17.512669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.954 [2024-07-13 06:06:17.512736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:25.954 [2024-07-13 06:06:17.512750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.275 ms 00:19:25.954 [2024-07-13 06:06:17.512760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.954 [2024-07-13 06:06:17.513888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.954 [2024-07-13 06:06:17.513954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:25.954 [2024-07-13 06:06:17.513968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms 00:19:25.954 [2024-07-13 06:06:17.513977] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.954 [2024-07-13 06:06:17.514009] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:25.954 [2024-07-13 06:06:17.514030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:25.954 [Band 2 through Band 98 elided: 97 identical ftl_dev_dump_bands entries, each 0 / 261120 wr_cnt: 0 state: free]
00:19:25.955 [2024-07-13 06:06:17.515192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:25.955 [2024-07-13 06:06:17.515204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:25.955 [2024-07-13 06:06:17.515223] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:25.955 [2024-07-13 06:06:17.515244] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b0bf905e-23fc-4217-a19b-6c8f1c14c1b0 00:19:25.955 [2024-07-13 06:06:17.515255] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:25.955 [2024-07-13 06:06:17.515265] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:25.955 [2024-07-13 06:06:17.515274] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:25.955 [2024-07-13 06:06:17.515285] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:25.955 [2024-07-13 06:06:17.515295] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:25.955 [2024-07-13 06:06:17.515313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:25.955 [2024-07-13 06:06:17.515327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:25.955 [2024-07-13 06:06:17.515336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:25.955 [2024-07-13 06:06:17.515345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:25.955 [2024-07-13 06:06:17.515356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.955 [2024-07-13 06:06:17.515366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:25.955 [2024-07-13 06:06:17.515377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.348 ms 00:19:25.955 [2024-07-13 06:06:17.515395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.955 [2024-07-13 06:06:17.516758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.955 [2024-07-13 06:06:17.516784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:25.955 [2024-07-13 06:06:17.516796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.335 ms 00:19:25.955 [2024-07-13 06:06:17.516807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.955 [2024-07-13 06:06:17.516884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.955 [2024-07-13 06:06:17.516898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:25.955 [2024-07-13 06:06:17.516909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:25.955 [2024-07-13 06:06:17.516919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.955 [2024-07-13 06:06:17.521379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.955 [2024-07-13 06:06:17.521418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:25.955 [2024-07-13 06:06:17.521432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.955 [2024-07-13 06:06:17.521454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.955 [2024-07-13 06:06:17.521518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.955 [2024-07-13 06:06:17.521532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
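The stats dump just above reports "WAF: inf": write amplification factor is total media writes divided by user writes, and this window recorded 960 internal/housekeeping writes against zero user writes, so the ratio is undefined and printed as infinity. A minimal sketch of that computation (function and parameter names are hypothetical, not SPDK's):

    #include <stdio.h>

    /* WAF = media writes / user writes; zero user writes reports "inf",
     * matching the "total writes: 960, user writes: 0" dump above. */
    static void dump_waf(unsigned long total_writes, unsigned long user_writes)
    {
        if (user_writes == 0)
            printf("WAF: inf\n");
        else
            printf("WAF: %.2f\n", (double)total_writes / user_writes);
    }

    int main(void)
    {
        dump_waf(960, 0);  /* prints "WAF: inf", as in the trace */
        return 0;
    }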
[FTL][ftl0] name: Initialize bands metadata 00:19:25.955 [2024-07-13 06:06:17.521544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.955 [2024-07-13 06:06:17.521556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.955 [2024-07-13 06:06:17.521606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.955 [2024-07-13 06:06:17.521637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:25.955 [2024-07-13 06:06:17.521649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.955 [2024-07-13 06:06:17.521659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.955 [2024-07-13 06:06:17.521684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.955 [2024-07-13 06:06:17.521698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:25.955 [2024-07-13 06:06:17.521709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.955 [2024-07-13 06:06:17.521729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.955 [2024-07-13 06:06:17.529346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.955 [2024-07-13 06:06:17.529417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:25.955 [2024-07-13 06:06:17.529434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.955 [2024-07-13 06:06:17.529460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.955 [2024-07-13 06:06:17.535704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.955 [2024-07-13 06:06:17.535769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:25.955 [2024-07-13 06:06:17.535785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.955 [2024-07-13 06:06:17.535795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.955 [2024-07-13 06:06:17.535863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.955 [2024-07-13 06:06:17.535891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:25.955 [2024-07-13 06:06:17.535903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.955 [2024-07-13 06:06:17.535912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.955 [2024-07-13 06:06:17.535947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.955 [2024-07-13 06:06:17.535964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:25.955 [2024-07-13 06:06:17.535975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.955 [2024-07-13 06:06:17.535985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.955 [2024-07-13 06:06:17.536262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.955 [2024-07-13 06:06:17.536281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:25.955 [2024-07-13 06:06:17.536292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.955 [2024-07-13 06:06:17.536302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.955 [2024-07-13 06:06:17.536345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:19:25.955 [2024-07-13 06:06:17.536360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:25.955 [2024-07-13 06:06:17.536377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.955 [2024-07-13 06:06:17.536387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.955 [2024-07-13 06:06:17.536437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.955 [2024-07-13 06:06:17.536454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:25.955 [2024-07-13 06:06:17.536464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.955 [2024-07-13 06:06:17.536474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.955 [2024-07-13 06:06:17.536521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:25.955 [2024-07-13 06:06:17.536541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:25.955 [2024-07-13 06:06:17.536552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:25.955 [2024-07-13 06:06:17.536562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.956 [2024-07-13 06:06:17.536685] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.829 ms, result 0 00:19:26.521 00:19:26.521 00:19:26.521 06:06:18 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:26.521 [2024-07-13 06:06:18.141774] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:19:26.521 [2024-07-13 06:06:18.141967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91854 ] 00:19:26.779 [2024-07-13 06:06:18.289749] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.779 [2024-07-13 06:06:18.323636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.779 [2024-07-13 06:06:18.404944] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:26.780 [2024-07-13 06:06:18.405057] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:27.038 [2024-07-13 06:06:18.561355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.038 [2024-07-13 06:06:18.561434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:27.038 [2024-07-13 06:06:18.561472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:27.038 [2024-07-13 06:06:18.561485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.038 [2024-07-13 06:06:18.561572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.038 [2024-07-13 06:06:18.561607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:27.038 [2024-07-13 06:06:18.561631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:27.038 [2024-07-13 06:06:18.561641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.038 [2024-07-13 06:06:18.561710] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:27.038 [2024-07-13 06:06:18.562000] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:27.038 [2024-07-13 06:06:18.562041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.038 [2024-07-13 06:06:18.562069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:27.038 [2024-07-13 06:06:18.562082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:19:27.038 [2024-07-13 06:06:18.562097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.038 [2024-07-13 06:06:18.563309] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:27.038 [2024-07-13 06:06:18.565571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.038 [2024-07-13 06:06:18.565641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:27.038 [2024-07-13 06:06:18.565690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.263 ms 00:19:27.038 [2024-07-13 06:06:18.565702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.038 [2024-07-13 06:06:18.565779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.038 [2024-07-13 06:06:18.565799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:27.038 [2024-07-13 06:06:18.565813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:27.038 [2024-07-13 06:06:18.565824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.038 [2024-07-13 06:06:18.570201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
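The EAL parameters above pass "-c 0x1", a core mask with a single bit set; that is why the app reports one available core and starts a single reactor on core 0. A sketch of how such a mask maps to reactor cores (illustrative only, not the SPDK startup code):

    #include <stdio.h>

    int main(void)
    {
        unsigned long coremask = 0x1;  /* from "-c 0x1" in the EAL parameters */
        int cores = 0;

        /* Count the set bits: one bit set means one usable core. */
        for (unsigned long m = coremask; m; m >>= 1)
            cores += (int)(m & 1);
        printf("Total cores available: %d\n", cores);

        /* Each set bit's position is a core that gets a reactor. */
        for (int c = 0; coremask; coremask >>= 1, c++)
            if (coremask & 1)
                printf("Reactor started on core %d\n", c);
        return 0;
    }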
00:19:27.038 [2024-07-13 06:06:18.570272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:27.038 [2024-07-13 06:06:18.570305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.303 ms 00:19:27.038 [2024-07-13 06:06:18.570316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.038 [2024-07-13 06:06:18.570438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.038 [2024-07-13 06:06:18.570457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:27.038 [2024-07-13 06:06:18.570469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:27.038 [2024-07-13 06:06:18.570489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.038 [2024-07-13 06:06:18.570610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.038 [2024-07-13 06:06:18.570636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:27.038 [2024-07-13 06:06:18.570657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:27.038 [2024-07-13 06:06:18.570670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.038 [2024-07-13 06:06:18.570705] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:27.038 [2024-07-13 06:06:18.572030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.038 [2024-07-13 06:06:18.572081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:27.038 [2024-07-13 06:06:18.572114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.334 ms 00:19:27.038 [2024-07-13 06:06:18.572138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.038 [2024-07-13 06:06:18.572210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.038 [2024-07-13 06:06:18.572240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:27.038 [2024-07-13 06:06:18.572253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:27.038 [2024-07-13 06:06:18.572265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.038 [2024-07-13 06:06:18.572301] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:27.038 [2024-07-13 06:06:18.572331] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:27.038 [2024-07-13 06:06:18.572381] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:27.038 [2024-07-13 06:06:18.572412] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:27.038 [2024-07-13 06:06:18.572528] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:27.038 [2024-07-13 06:06:18.572557] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:27.038 [2024-07-13 06:06:18.572574] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:27.038 [2024-07-13 06:06:18.572589] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:27.038 [2024-07-13 06:06:18.572602] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:27.038 [2024-07-13 06:06:18.572615] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:27.038 [2024-07-13 06:06:18.572626] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:27.038 [2024-07-13 06:06:18.572636] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:27.038 [2024-07-13 06:06:18.572658] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:27.038 [2024-07-13 06:06:18.572670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.038 [2024-07-13 06:06:18.572690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:27.038 [2024-07-13 06:06:18.572706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:19:27.038 [2024-07-13 06:06:18.572718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.038 [2024-07-13 06:06:18.572813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.038 [2024-07-13 06:06:18.572838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:27.038 [2024-07-13 06:06:18.572858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:27.038 [2024-07-13 06:06:18.572870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.038 [2024-07-13 06:06:18.572982] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:27.039 [2024-07-13 06:06:18.573000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:27.039 [2024-07-13 06:06:18.573013] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:27.039 [2024-07-13 06:06:18.573030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.039 [2024-07-13 06:06:18.573041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:27.039 [2024-07-13 06:06:18.573051] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:27.039 [2024-07-13 06:06:18.573062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:27.039 [2024-07-13 06:06:18.573073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:27.039 [2024-07-13 06:06:18.573084] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:27.039 [2024-07-13 06:06:18.573094] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:27.039 [2024-07-13 06:06:18.573104] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:27.039 [2024-07-13 06:06:18.573115] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:27.039 [2024-07-13 06:06:18.573125] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:27.039 [2024-07-13 06:06:18.573158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:27.039 [2024-07-13 06:06:18.573185] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:27.039 [2024-07-13 06:06:18.573198] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.039 [2024-07-13 06:06:18.573209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:27.039 [2024-07-13 06:06:18.573219] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:27.039 [2024-07-13 06:06:18.573231] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.039 [2024-07-13 06:06:18.573242] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:27.039 [2024-07-13 06:06:18.573256] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:27.039 [2024-07-13 06:06:18.573266] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.039 [2024-07-13 06:06:18.573276] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:27.039 [2024-07-13 06:06:18.573286] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:27.039 [2024-07-13 06:06:18.573296] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.039 [2024-07-13 06:06:18.573306] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:27.039 [2024-07-13 06:06:18.573316] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:27.039 [2024-07-13 06:06:18.573326] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.039 [2024-07-13 06:06:18.573336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:27.039 [2024-07-13 06:06:18.573346] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:27.039 [2024-07-13 06:06:18.573361] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:27.039 [2024-07-13 06:06:18.573373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:27.039 [2024-07-13 06:06:18.573384] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:27.039 [2024-07-13 06:06:18.573394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:27.039 [2024-07-13 06:06:18.573404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:27.039 [2024-07-13 06:06:18.573415] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:27.039 [2024-07-13 06:06:18.573425] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:27.039 [2024-07-13 06:06:18.573435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:27.039 [2024-07-13 06:06:18.573445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:27.039 [2024-07-13 06:06:18.573456] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.039 [2024-07-13 06:06:18.573466] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:27.039 [2024-07-13 06:06:18.573476] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:27.039 [2024-07-13 06:06:18.573486] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.039 [2024-07-13 06:06:18.573495] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:27.039 [2024-07-13 06:06:18.573507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:27.039 [2024-07-13 06:06:18.573518] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:27.039 [2024-07-13 06:06:18.573543] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.039 [2024-07-13 06:06:18.573555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:27.039 [2024-07-13 06:06:18.573566] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:27.039 [2024-07-13 06:06:18.573576] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:27.039 
[2024-07-13 06:06:18.573589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:27.039 [2024-07-13 06:06:18.573599] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:27.039 [2024-07-13 06:06:18.573609] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:27.039 [2024-07-13 06:06:18.573621] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:27.039 [2024-07-13 06:06:18.573635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:27.039 [2024-07-13 06:06:18.573648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:27.039 [2024-07-13 06:06:18.573660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:27.039 [2024-07-13 06:06:18.573671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:27.039 [2024-07-13 06:06:18.573682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:27.039 [2024-07-13 06:06:18.573693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:27.039 [2024-07-13 06:06:18.573704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:27.039 [2024-07-13 06:06:18.573715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:27.039 [2024-07-13 06:06:18.573729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:27.039 [2024-07-13 06:06:18.573741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:27.039 [2024-07-13 06:06:18.573753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:27.039 [2024-07-13 06:06:18.573764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:27.039 [2024-07-13 06:06:18.573775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:27.039 [2024-07-13 06:06:18.573787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:27.039 [2024-07-13 06:06:18.573798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:27.039 [2024-07-13 06:06:18.573810] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:27.039 [2024-07-13 06:06:18.573822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:27.039 [2024-07-13 06:06:18.573846] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:27.039 [2024-07-13 06:06:18.573858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:27.039 [2024-07-13 06:06:18.573869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:27.039 [2024-07-13 06:06:18.573881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:27.039 [2024-07-13 06:06:18.573893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.039 [2024-07-13 06:06:18.573905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:27.039 [2024-07-13 06:06:18.573921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:19:27.039 [2024-07-13 06:06:18.573934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.039 [2024-07-13 06:06:18.589924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.039 [2024-07-13 06:06:18.589984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:27.039 [2024-07-13 06:06:18.590021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.917 ms 00:19:27.039 [2024-07-13 06:06:18.590033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.039 [2024-07-13 06:06:18.590169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.039 [2024-07-13 06:06:18.590224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:27.039 [2024-07-13 06:06:18.590238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:27.039 [2024-07-13 06:06:18.590253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.039 [2024-07-13 06:06:18.597867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.039 [2024-07-13 06:06:18.597930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:27.039 [2024-07-13 06:06:18.597963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.519 ms 00:19:27.039 [2024-07-13 06:06:18.597975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.039 [2024-07-13 06:06:18.598032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.039 [2024-07-13 06:06:18.598047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:27.039 [2024-07-13 06:06:18.598078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:27.039 [2024-07-13 06:06:18.598088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.039 [2024-07-13 06:06:18.598485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.039 [2024-07-13 06:06:18.598516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:27.039 [2024-07-13 06:06:18.598530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:19:27.039 [2024-07-13 06:06:18.598541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.039 [2024-07-13 06:06:18.598705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.039 [2024-07-13 06:06:18.598739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:27.039 [2024-07-13 06:06:18.598753] 
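The SB metadata dumps above list each region in hex 4096-byte blocks (blk_offs/blk_sz), while the earlier region dump prints MiB, and the two agree. For example, region type 0x2 (the L2P) at blk_offs:0x20 blk_sz:0x5000 is the "Region l2p / offset: 0.12 MiB / blocks: 80.00 MiB" entry, and 80 MiB also matches the reported 20971520 L2P entries at an address size of 4 bytes. A worked check, assuming the 4096-byte FTL block size:

    #include <stdio.h>

    int main(void)
    {
        const double block = 4096.0;          /* assumed FTL block size, bytes */
        const double mib = 1024.0 * 1024.0;

        printf("l2p offset: %f MiB\n", 0x20 * block / mib);   /* 0.125, shown rounded as 0.12 */
        printf("l2p size:   %f MiB\n", 0x5000 * block / mib); /* 80.000000 */

        /* Cross-check against the startup trace's L2P parameters:
         * 20971520 entries x 4-byte addresses = 80 MiB. */
        printf("entries:    %f MiB\n", 20971520.0 * 4 / mib);
        return 0;
    }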
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:19:27.039 [2024-07-13 06:06:18.598770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.039 [2024-07-13 06:06:18.603451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.039 [2024-07-13 06:06:18.603516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:27.039 [2024-07-13 06:06:18.603549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.652 ms 00:19:27.039 [2024-07-13 06:06:18.603562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.039 [2024-07-13 06:06:18.605977] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:27.040 [2024-07-13 06:06:18.606103] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:27.040 [2024-07-13 06:06:18.606137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.040 [2024-07-13 06:06:18.606176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:27.040 [2024-07-13 06:06:18.606190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.429 ms 00:19:27.040 [2024-07-13 06:06:18.606205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.040 [2024-07-13 06:06:18.621028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.040 [2024-07-13 06:06:18.621089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:27.040 [2024-07-13 06:06:18.621108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.704 ms 00:19:27.040 [2024-07-13 06:06:18.621120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.040 [2024-07-13 06:06:18.623002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.040 [2024-07-13 06:06:18.623054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:27.040 [2024-07-13 06:06:18.623101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.810 ms 00:19:27.040 [2024-07-13 06:06:18.623112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.040 [2024-07-13 06:06:18.624751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.040 [2024-07-13 06:06:18.624801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:27.040 [2024-07-13 06:06:18.624832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.568 ms 00:19:27.040 [2024-07-13 06:06:18.624846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.040 [2024-07-13 06:06:18.625273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.040 [2024-07-13 06:06:18.625304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:27.040 [2024-07-13 06:06:18.625318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:19:27.040 [2024-07-13 06:06:18.625345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.040 [2024-07-13 06:06:18.641802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.040 [2024-07-13 06:06:18.641888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:27.040 [2024-07-13 06:06:18.641926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.431 ms 00:19:27.040 [2024-07-13 06:06:18.641938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.040 [2024-07-13 06:06:18.649757] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:27.040 [2024-07-13 06:06:18.652108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.040 [2024-07-13 06:06:18.652168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:27.040 [2024-07-13 06:06:18.652202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.106 ms 00:19:27.040 [2024-07-13 06:06:18.652214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.040 [2024-07-13 06:06:18.652288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.040 [2024-07-13 06:06:18.652316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:27.040 [2024-07-13 06:06:18.652329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:27.040 [2024-07-13 06:06:18.652339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.040 [2024-07-13 06:06:18.652484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.040 [2024-07-13 06:06:18.652515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:27.040 [2024-07-13 06:06:18.652530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:27.040 [2024-07-13 06:06:18.652541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.040 [2024-07-13 06:06:18.652575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.040 [2024-07-13 06:06:18.652591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:27.040 [2024-07-13 06:06:18.652602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:27.040 [2024-07-13 06:06:18.652626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.040 [2024-07-13 06:06:18.652668] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:27.040 [2024-07-13 06:06:18.652698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.040 [2024-07-13 06:06:18.652714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:27.040 [2024-07-13 06:06:18.652725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:27.040 [2024-07-13 06:06:18.652736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.040 [2024-07-13 06:06:18.656069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.040 [2024-07-13 06:06:18.656128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:27.040 [2024-07-13 06:06:18.656161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.308 ms 00:19:27.040 [2024-07-13 06:06:18.656173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.040 [2024-07-13 06:06:18.656267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.040 [2024-07-13 06:06:18.656286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:27.040 [2024-07-13 06:06:18.656306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:27.040 [2024-07-13 06:06:18.656318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.040 
[2024-07-13 06:06:18.657523] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 95.645 ms, result 0 00:20:11.605  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-13 06:07:03.152024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.605 [2024-07-13 06:07:03.152112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:11.605 [2024-07-13 06:07:03.152162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:11.605 [2024-07-13 06:07:03.152178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.605 [2024-07-13 06:07:03.152225] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:11.605 [2024-07-13 06:07:03.152718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.605 [2024-07-13 06:07:03.152747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:11.605 [2024-07-13 06:07:03.152762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:20:11.605 [2024-07-13 06:07:03.152774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.605 [2024-07-13 06:07:03.153013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.605 [2024-07-13 06:07:03.153038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:11.605 [2024-07-13 06:07:03.153060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:20:11.605 [2024-07-13 06:07:03.153071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.605 [2024-07-13 06:07:03.157788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.605 [2024-07-13 06:07:03.158051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:11.605 [2024-07-13 06:07:03.158069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.694 ms 00:20:11.605 [2024-07-13 06:07:03.158081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:11.605 [2024-07-13 06:07:03.165534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.605 [2024-07-13 06:07:03.165620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:11.605 [2024-07-13 06:07:03.165636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.424 ms 00:20:11.605 [2024-07-13 06:07:03.165654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.605 [2024-07-13 06:07:03.167220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.605 [2024-07-13 06:07:03.167275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:11.605 [2024-07-13 06:07:03.167293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.500 ms 00:20:11.605 [2024-07-13 06:07:03.167304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.605 [2024-07-13 06:07:03.170337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.605 [2024-07-13 06:07:03.170393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:11.605 [2024-07-13 06:07:03.170422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.009 ms 00:20:11.605 [2024-07-13 06:07:03.170433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.605 [2024-07-13 06:07:03.170571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.605 [2024-07-13 06:07:03.170591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:11.606 [2024-07-13 06:07:03.170607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:20:11.606 [2024-07-13 06:07:03.170633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.606 [2024-07-13 06:07:03.172563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.606 [2024-07-13 06:07:03.172631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:11.606 [2024-07-13 06:07:03.172647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.894 ms 00:20:11.606 [2024-07-13 06:07:03.172658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.606 [2024-07-13 06:07:03.174687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.606 [2024-07-13 06:07:03.174738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:11.606 [2024-07-13 06:07:03.174753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.008 ms 00:20:11.606 [2024-07-13 06:07:03.174762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.606 [2024-07-13 06:07:03.176055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.606 [2024-07-13 06:07:03.176106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:11.606 [2024-07-13 06:07:03.176120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms 00:20:11.606 [2024-07-13 06:07:03.176148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.606 [2024-07-13 06:07:03.177357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.606 [2024-07-13 06:07:03.177396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:11.606 [2024-07-13 06:07:03.177412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.159 ms 00:20:11.606 [2024-07-13 
06:07:03.177423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.606 [2024-07-13 06:07:03.177446] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:11.606 [2024-07-13 06:07:03.177513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free 00:20:11.607 [2024-07-13 06:07:03.178750] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:11.607 [2024-07-13 06:07:03.178760] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b0bf905e-23fc-4217-a19b-6c8f1c14c1b0 00:20:11.607 [2024-07-13 06:07:03.178771] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:11.607 [2024-07-13 06:07:03.178781] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:11.607 [2024-07-13 06:07:03.178791] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:11.607 [2024-07-13 06:07:03.178802] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:11.607 [2024-07-13 06:07:03.178811] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:11.607 [2024-07-13 06:07:03.178827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:11.607 [2024-07-13 06:07:03.178838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:11.607 [2024-07-13 06:07:03.178847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:11.607 [2024-07-13 06:07:03.178856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:11.607 [2024-07-13 06:07:03.178868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.607 [2024-07-13 06:07:03.178878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:11.607 [2024-07-13 06:07:03.178899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:20:11.607 [2024-07-13 06:07:03.178910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.607 [2024-07-13 06:07:03.180128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.607 [2024-07-13 06:07:03.180189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:11.607 [2024-07-13 06:07:03.180204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.185 ms 00:20:11.607 [2024-07-13 06:07:03.180219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.607 [2024-07-13 06:07:03.180300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.607 [2024-07-13 06:07:03.180315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:11.607 [2024-07-13 06:07:03.180354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:11.607 [2024-07-13 06:07:03.180364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.607 [2024-07-13 06:07:03.184437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.607 [2024-07-13 06:07:03.184469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:11.607 [2024-07-13 06:07:03.184483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.607 [2024-07-13 06:07:03.184497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.607 [2024-07-13 06:07:03.184549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.607 [2024-07-13 06:07:03.184564] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:11.607 [2024-07-13 06:07:03.184574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.607 [2024-07-13 06:07:03.184584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.607 [2024-07-13 06:07:03.184630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.607 [2024-07-13 06:07:03.184646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:11.607 [2024-07-13 06:07:03.184699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.608 [2024-07-13 06:07:03.184725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.608 [2024-07-13 06:07:03.184750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.608 [2024-07-13 06:07:03.184763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:11.608 [2024-07-13 06:07:03.184783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.608 [2024-07-13 06:07:03.184793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.608 [2024-07-13 06:07:03.192260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.608 [2024-07-13 06:07:03.192325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:11.608 [2024-07-13 06:07:03.192340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.608 [2024-07-13 06:07:03.192358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.608 [2024-07-13 06:07:03.199005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.608 [2024-07-13 06:07:03.199068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:11.608 [2024-07-13 06:07:03.199085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.608 [2024-07-13 06:07:03.199094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.608 [2024-07-13 06:07:03.199167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.608 [2024-07-13 06:07:03.199184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:11.608 [2024-07-13 06:07:03.199195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.608 [2024-07-13 06:07:03.199204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.608 [2024-07-13 06:07:03.199237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.608 [2024-07-13 06:07:03.199283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:11.608 [2024-07-13 06:07:03.199310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.608 [2024-07-13 06:07:03.199320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.608 [2024-07-13 06:07:03.199410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.608 [2024-07-13 06:07:03.199428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:11.608 [2024-07-13 06:07:03.199441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.608 [2024-07-13 06:07:03.199452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.608 [2024-07-13 06:07:03.199497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:20:11.608 [2024-07-13 06:07:03.199525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:11.608 [2024-07-13 06:07:03.199537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.608 [2024-07-13 06:07:03.199547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.608 [2024-07-13 06:07:03.199590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.608 [2024-07-13 06:07:03.199605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:11.608 [2024-07-13 06:07:03.199616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.608 [2024-07-13 06:07:03.199626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.608 [2024-07-13 06:07:03.199678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:11.608 [2024-07-13 06:07:03.199704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:11.608 [2024-07-13 06:07:03.199716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:11.608 [2024-07-13 06:07:03.199726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.608 [2024-07-13 06:07:03.199868] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.803 ms, result 0 00:20:11.866 00:20:11.866 00:20:11.866 06:07:03 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:13.799 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:13.799 06:07:05 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:13.799 [2024-07-13 06:07:05.468047] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:20:13.799 [2024-07-13 06:07:05.468247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92330 ] 00:20:14.056 [2024-07-13 06:07:05.616591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.056 [2024-07-13 06:07:05.658552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.056 [2024-07-13 06:07:05.750221] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:14.056 [2024-07-13 06:07:05.750337] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:14.316 [2024-07-13 06:07:05.908284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.316 [2024-07-13 06:07:05.908343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:14.316 [2024-07-13 06:07:05.908370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:14.316 [2024-07-13 06:07:05.908388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.316 [2024-07-13 06:07:05.908481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.316 [2024-07-13 06:07:05.908536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:14.316 [2024-07-13 06:07:05.908586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:14.316 [2024-07-13 06:07:05.908605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.316 [2024-07-13 06:07:05.908669] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:14.316 [2024-07-13 06:07:05.909082] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:14.316 [2024-07-13 06:07:05.909150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.316 [2024-07-13 06:07:05.909176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:14.316 [2024-07-13 06:07:05.909251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:20:14.316 [2024-07-13 06:07:05.909284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.316 [2024-07-13 06:07:05.910525] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:14.316 [2024-07-13 06:07:05.912565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.316 [2024-07-13 06:07:05.912606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:14.316 [2024-07-13 06:07:05.912649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.042 ms 00:20:14.316 [2024-07-13 06:07:05.912685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.316 [2024-07-13 06:07:05.912820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.316 [2024-07-13 06:07:05.912851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:14.316 [2024-07-13 06:07:05.912888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:14.316 [2024-07-13 06:07:05.912936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.316 [2024-07-13 06:07:05.917370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:14.316 [2024-07-13 06:07:05.917413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:14.316 [2024-07-13 06:07:05.917438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.308 ms 00:20:14.316 [2024-07-13 06:07:05.917458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.316 [2024-07-13 06:07:05.917636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.316 [2024-07-13 06:07:05.917665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:14.316 [2024-07-13 06:07:05.917686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:20:14.316 [2024-07-13 06:07:05.917704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.316 [2024-07-13 06:07:05.917821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.316 [2024-07-13 06:07:05.917860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:14.316 [2024-07-13 06:07:05.917896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:14.316 [2024-07-13 06:07:05.917916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.316 [2024-07-13 06:07:05.917973] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:14.316 [2024-07-13 06:07:05.919377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.316 [2024-07-13 06:07:05.919413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:14.316 [2024-07-13 06:07:05.919437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:20:14.316 [2024-07-13 06:07:05.919456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.316 [2024-07-13 06:07:05.919536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.316 [2024-07-13 06:07:05.919567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:14.316 [2024-07-13 06:07:05.919588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:14.316 [2024-07-13 06:07:05.919624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.316 [2024-07-13 06:07:05.919667] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:14.316 [2024-07-13 06:07:05.919721] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:14.316 [2024-07-13 06:07:05.919809] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:14.316 [2024-07-13 06:07:05.919854] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:14.316 [2024-07-13 06:07:05.920003] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:14.316 [2024-07-13 06:07:05.920052] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:14.316 [2024-07-13 06:07:05.920078] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:14.316 [2024-07-13 06:07:05.920101] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:14.316 [2024-07-13 06:07:05.920192] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:14.316 [2024-07-13 06:07:05.920216] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:14.316 [2024-07-13 06:07:05.920234] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:14.316 [2024-07-13 06:07:05.920251] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:14.316 [2024-07-13 06:07:05.920269] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:14.316 [2024-07-13 06:07:05.920289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.316 [2024-07-13 06:07:05.920328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:14.316 [2024-07-13 06:07:05.920358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:20:14.316 [2024-07-13 06:07:05.920378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.316 [2024-07-13 06:07:05.920512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.316 [2024-07-13 06:07:05.920556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:14.316 [2024-07-13 06:07:05.920603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:14.316 [2024-07-13 06:07:05.920630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.316 [2024-07-13 06:07:05.920780] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:14.316 [2024-07-13 06:07:05.920834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:14.316 [2024-07-13 06:07:05.920856] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.316 [2024-07-13 06:07:05.920896] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.316 [2024-07-13 06:07:05.920915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:14.316 [2024-07-13 06:07:05.920933] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:14.316 [2024-07-13 06:07:05.920950] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:14.316 [2024-07-13 06:07:05.920967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:14.316 [2024-07-13 06:07:05.920983] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:14.316 [2024-07-13 06:07:05.920999] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.316 [2024-07-13 06:07:05.921016] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:14.317 [2024-07-13 06:07:05.921032] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:14.317 [2024-07-13 06:07:05.921048] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.317 [2024-07-13 06:07:05.921065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:14.317 [2024-07-13 06:07:05.921086] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:14.317 [2024-07-13 06:07:05.921105] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.317 [2024-07-13 06:07:05.921122] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:14.317 [2024-07-13 06:07:05.921210] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:14.317 [2024-07-13 06:07:05.921234] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.317 [2024-07-13 06:07:05.921252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:14.317 [2024-07-13 06:07:05.921270] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:14.317 [2024-07-13 06:07:05.921294] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.317 [2024-07-13 06:07:05.921315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:14.317 [2024-07-13 06:07:05.921332] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:14.317 [2024-07-13 06:07:05.921348] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.317 [2024-07-13 06:07:05.921366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:14.317 [2024-07-13 06:07:05.921384] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:14.317 [2024-07-13 06:07:05.921401] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.317 [2024-07-13 06:07:05.921419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:14.317 [2024-07-13 06:07:05.921437] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:14.317 [2024-07-13 06:07:05.921463] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.317 [2024-07-13 06:07:05.921482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:14.317 [2024-07-13 06:07:05.921514] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:14.317 [2024-07-13 06:07:05.921532] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.317 [2024-07-13 06:07:05.921549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:14.317 [2024-07-13 06:07:05.921567] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:14.317 [2024-07-13 06:07:05.921584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.317 [2024-07-13 06:07:05.921600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:14.317 [2024-07-13 06:07:05.921619] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:14.317 [2024-07-13 06:07:05.921646] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.317 [2024-07-13 06:07:05.921665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:14.317 [2024-07-13 06:07:05.921683] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:14.317 [2024-07-13 06:07:05.921700] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.317 [2024-07-13 06:07:05.921723] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:14.317 [2024-07-13 06:07:05.921756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:14.317 [2024-07-13 06:07:05.921774] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.317 [2024-07-13 06:07:05.921796] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.317 [2024-07-13 06:07:05.921816] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:14.317 [2024-07-13 06:07:05.921833] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:14.317 [2024-07-13 06:07:05.921850] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:14.317 
[2024-07-13 06:07:05.921867] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:14.317 [2024-07-13 06:07:05.921884] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:14.317 [2024-07-13 06:07:05.921901] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:14.317 [2024-07-13 06:07:05.921919] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:14.317 [2024-07-13 06:07:05.921942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.317 [2024-07-13 06:07:05.921963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:14.317 [2024-07-13 06:07:05.921982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:14.317 [2024-07-13 06:07:05.922002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:14.317 [2024-07-13 06:07:05.922020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:14.317 [2024-07-13 06:07:05.922040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:14.317 [2024-07-13 06:07:05.922057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:14.317 [2024-07-13 06:07:05.922075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:14.317 [2024-07-13 06:07:05.922098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:14.317 [2024-07-13 06:07:05.922116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:14.317 [2024-07-13 06:07:05.922135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:14.317 [2024-07-13 06:07:05.922194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:14.317 [2024-07-13 06:07:05.922216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:14.317 [2024-07-13 06:07:05.922234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:14.317 [2024-07-13 06:07:05.922253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:14.317 [2024-07-13 06:07:05.922278] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:14.317 [2024-07-13 06:07:05.922313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.317 [2024-07-13 06:07:05.922348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:14.317 [2024-07-13 06:07:05.922369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:14.317 [2024-07-13 06:07:05.922389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:14.317 [2024-07-13 06:07:05.922408] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:14.317 [2024-07-13 06:07:05.922435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.317 [2024-07-13 06:07:05.922455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:14.317 [2024-07-13 06:07:05.922488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.742 ms 00:20:14.317 [2024-07-13 06:07:05.922514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.317 [2024-07-13 06:07:05.939807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.317 [2024-07-13 06:07:05.939858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:14.317 [2024-07-13 06:07:05.939884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.177 ms 00:20:14.317 [2024-07-13 06:07:05.939902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.317 [2024-07-13 06:07:05.940088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.317 [2024-07-13 06:07:05.940121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:14.317 [2024-07-13 06:07:05.940173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:20:14.317 [2024-07-13 06:07:05.940214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.317 [2024-07-13 06:07:05.947619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.317 [2024-07-13 06:07:05.947661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:14.317 [2024-07-13 06:07:05.947684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.292 ms 00:20:14.317 [2024-07-13 06:07:05.947702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.317 [2024-07-13 06:07:05.947801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.317 [2024-07-13 06:07:05.947842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:14.317 [2024-07-13 06:07:05.947885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:14.317 [2024-07-13 06:07:05.947902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.317 [2024-07-13 06:07:05.948300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.317 [2024-07-13 06:07:05.948336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:14.317 [2024-07-13 06:07:05.948373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:20:14.317 [2024-07-13 06:07:05.948392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.317 [2024-07-13 06:07:05.948625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.317 [2024-07-13 06:07:05.948667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:14.317 [2024-07-13 06:07:05.948692] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:20:14.317 [2024-07-13 06:07:05.948721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.317 [2024-07-13 06:07:05.953624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.317 [2024-07-13 06:07:05.953662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:14.317 [2024-07-13 06:07:05.953686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.829 ms 00:20:14.317 [2024-07-13 06:07:05.953706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.317 [2024-07-13 06:07:05.956166] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:14.317 [2024-07-13 06:07:05.956229] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:14.317 [2024-07-13 06:07:05.956255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.317 [2024-07-13 06:07:05.956272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:14.317 [2024-07-13 06:07:05.956304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.304 ms 00:20:14.317 [2024-07-13 06:07:05.956339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.317 [2024-07-13 06:07:05.971501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.317 [2024-07-13 06:07:05.971542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:14.317 [2024-07-13 06:07:05.971569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.032 ms 00:20:14.317 [2024-07-13 06:07:05.971603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.317 [2024-07-13 06:07:05.973675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.317 [2024-07-13 06:07:05.973713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:14.318 [2024-07-13 06:07:05.973767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.965 ms 00:20:14.318 [2024-07-13 06:07:05.973783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.318 [2024-07-13 06:07:05.975486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.318 [2024-07-13 06:07:05.975525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:14.318 [2024-07-13 06:07:05.975547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.592 ms 00:20:14.318 [2024-07-13 06:07:05.975565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.318 [2024-07-13 06:07:05.976111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.318 [2024-07-13 06:07:05.976183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:14.318 [2024-07-13 06:07:05.976212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:20:14.318 [2024-07-13 06:07:05.976241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.318 [2024-07-13 06:07:05.993565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.318 [2024-07-13 06:07:05.993654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:14.318 [2024-07-13 06:07:05.993682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.261 ms 00:20:14.318 [2024-07-13 06:07:05.993700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.318 [2024-07-13 06:07:06.002726] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:14.318 [2024-07-13 06:07:06.005264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.318 [2024-07-13 06:07:06.005316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:14.318 [2024-07-13 06:07:06.005343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.451 ms 00:20:14.318 [2024-07-13 06:07:06.005364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.318 [2024-07-13 06:07:06.005455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.318 [2024-07-13 06:07:06.005507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:14.318 [2024-07-13 06:07:06.005533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:14.318 [2024-07-13 06:07:06.005553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.318 [2024-07-13 06:07:06.005717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.318 [2024-07-13 06:07:06.005812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:14.318 [2024-07-13 06:07:06.005834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:14.318 [2024-07-13 06:07:06.005851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.318 [2024-07-13 06:07:06.005900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.318 [2024-07-13 06:07:06.005924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:14.318 [2024-07-13 06:07:06.005944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:14.318 [2024-07-13 06:07:06.005961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.318 [2024-07-13 06:07:06.006020] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:14.318 [2024-07-13 06:07:06.006070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.318 [2024-07-13 06:07:06.006107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:14.318 [2024-07-13 06:07:06.006127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:14.318 [2024-07-13 06:07:06.006164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.318 [2024-07-13 06:07:06.009699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.318 [2024-07-13 06:07:06.009741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:14.318 [2024-07-13 06:07:06.009767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.489 ms 00:20:14.318 [2024-07-13 06:07:06.009787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.318 [2024-07-13 06:07:06.009993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.318 [2024-07-13 06:07:06.010038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:14.318 [2024-07-13 06:07:06.010072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:14.318 [2024-07-13 06:07:06.010092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.318 
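The records above show how each FTL management step is traced: a group of four trace_step lines (Action, name, duration, status) is emitted when the step completes. Below is a minimal parsing sketch for pulling the per-step durations out of such a capture; the regex is an assumption based on the record layout visible in this log and expects unwrapped lines, and the trailing gap check uses two timestamps copied from the records above ("Initialize P2L checkpointing" status at 06:07:05.976241, "Restore P2L checkpoints" Action at 06:07:05.993565).

import re
from datetime import datetime

# Matches one trace_step group as it appears in this log, e.g.
# "name: Initialize L2P 00:20:14.318 [2024-07-13 ...] mngt/ftl_mngt.c:
#  430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.451 ms"
STEP_RE = re.compile(
    r"name: (?P<name>.+?) \d{2}:\d{2}:\d{2}\.\d{3} "
    r"\[[^\]]+\] mngt/ftl_mngt\.c: 430:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] "
    r"duration: (?P<ms>\d+\.\d+) ms"
)

def step_durations(capture: str) -> dict:
    """Return {step name: duration in ms} for one management process."""
    return {m["name"]: float(m["ms"]) for m in STEP_RE.finditer(capture)}

sample = ("name: Initialize L2P 00:20:14.318 [2024-07-13 06:07:06.005343] "
          "mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] "
          "duration: 11.451 ms")
print(step_durations(sample))                      # {'Initialize L2P': 11.451}

# A group is printed after its step finishes, so the wall-clock gap between
# consecutive groups approximates the later step's duration:
fmt = "%Y-%m-%d %H:%M:%S.%f"
gap = (datetime.strptime("2024-07-13 06:07:05.993565", fmt)
       - datetime.strptime("2024-07-13 06:07:05.976241", fmt))
print(f"{gap.total_seconds() * 1e3:.3f} ms")       # 17.324 ms vs. logged 17.261 ms

Summing step_durations() over one process should come in just under the total that finish_msg reports next, since that total also covers time spent between steps.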
[2024-07-13 06:07:06.011442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 102.609 ms, result 0 00:20:59.253  Copying: 1024/1024 [MB] (average 22 MBps) [2024-07-13 06:07:50.751475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.253 [2024-07-13 06:07:50.751742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:59.253 [2024-07-13 06:07:50.751777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:59.253 [2024-07-13 06:07:50.751791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.253 [2024-07-13 06:07:50.753017] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:59.253 [2024-07-13 06:07:50.755689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.253 [2024-07-13 06:07:50.755757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:59.253 [2024-07-13 06:07:50.755791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.624 ms 00:20:59.253 [2024-07-13 06:07:50.755803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.253 [2024-07-13 06:07:50.770038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.253 [2024-07-13 06:07:50.770096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:59.253 [2024-07-13 06:07:50.770130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.217 ms 00:20:59.253 [2024-07-13 06:07:50.770142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.253 [2024-07-13 06:07:50.792139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.253 [2024-07-13 06:07:50.792208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:59.253 [2024-07-13 06:07:50.792242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.962 ms 00:20:59.253 [2024-07-13 06:07:50.792262] mngt/ftl_mngt.c:
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.253 [2024-07-13 06:07:50.798373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.253 [2024-07-13 06:07:50.798423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:59.253 [2024-07-13 06:07:50.798470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.069 ms 00:20:59.253 [2024-07-13 06:07:50.798481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.253 [2024-07-13 06:07:50.800016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.253 [2024-07-13 06:07:50.800086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:59.253 [2024-07-13 06:07:50.800119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.460 ms 00:20:59.254 [2024-07-13 06:07:50.800130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.254 [2024-07-13 06:07:50.803156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.254 [2024-07-13 06:07:50.803249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:59.254 [2024-07-13 06:07:50.803296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.979 ms 00:20:59.254 [2024-07-13 06:07:50.803308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.254 [2024-07-13 06:07:50.916955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.254 [2024-07-13 06:07:50.917051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:59.254 [2024-07-13 06:07:50.917125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.600 ms 00:20:59.254 [2024-07-13 06:07:50.917145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.254 [2024-07-13 06:07:50.919197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.254 [2024-07-13 06:07:50.919235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:59.254 [2024-07-13 06:07:50.919249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.989 ms 00:20:59.254 [2024-07-13 06:07:50.919259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.254 [2024-07-13 06:07:50.920644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.254 [2024-07-13 06:07:50.920678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:59.254 [2024-07-13 06:07:50.920693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.349 ms 00:20:59.254 [2024-07-13 06:07:50.920703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.254 [2024-07-13 06:07:50.921908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.254 [2024-07-13 06:07:50.921944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:59.254 [2024-07-13 06:07:50.921975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.171 ms 00:20:59.254 [2024-07-13 06:07:50.921985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.254 [2024-07-13 06:07:50.923103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.254 [2024-07-13 06:07:50.923196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:59.254 [2024-07-13 06:07:50.923211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
1.057 ms 00:20:59.254 [2024-07-13 06:07:50.923221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.254 [2024-07-13 06:07:50.923256] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:59.254 [2024-07-13 06:07:50.923297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 121856 / 261120 wr_cnt: 1 state: open 00:20:59.254 [2024-07-13 06:07:50.923319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 
06:07:50.923606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 
00:20:59.254 [2024-07-13 06:07:50.923893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.923995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.924007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.924019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.924031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.924042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.924053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.924064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.924076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.924093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.924104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.924115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.924126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.924138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.924150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.924175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.924189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.924200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.924213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 
wr_cnt: 0 state: free 00:20:59.254 [2024-07-13 06:07:50.924225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:59.255 [2024-07-13 06:07:50.924541] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:59.255 [2024-07-13 06:07:50.924552] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b0bf905e-23fc-4217-a19b-6c8f1c14c1b0 00:20:59.255 [2024-07-13 06:07:50.924569] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 121856 00:20:59.255 [2024-07-13 06:07:50.924580] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 122816 00:20:59.255 [2024-07-13 06:07:50.924591] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 121856 00:20:59.255 [2024-07-13 06:07:50.924603] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:20:59.255 [2024-07-13 06:07:50.924613] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:59.255 [2024-07-13 06:07:50.924625] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:59.255 [2024-07-13 06:07:50.924635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:59.255 [2024-07-13 06:07:50.924645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:59.255 [2024-07-13 06:07:50.924655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:59.255 [2024-07-13 06:07:50.924666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.255 [2024-07-13 06:07:50.924678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:59.255 [2024-07-13 06:07:50.924689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:20:59.255 [2024-07-13 06:07:50.924700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.255 [2024-07-13 06:07:50.926179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.255 [2024-07-13 06:07:50.926226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:59.255 [2024-07-13 06:07:50.926240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.452 ms 00:20:59.255 [2024-07-13 06:07:50.926263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.255 [2024-07-13 06:07:50.926363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.255 [2024-07-13 06:07:50.926384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:59.255 [2024-07-13 06:07:50.926400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:59.255 [2024-07-13 06:07:50.926410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.255 [2024-07-13 06:07:50.930865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.255 [2024-07-13 06:07:50.930918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:59.255 [2024-07-13 06:07:50.930949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.255 [2024-07-13 06:07:50.930960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.255 [2024-07-13 06:07:50.931014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.255 [2024-07-13 
06:07:50.931028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:59.255 [2024-07-13 06:07:50.931045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.255 [2024-07-13 06:07:50.931056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.255 [2024-07-13 06:07:50.931172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.255 [2024-07-13 06:07:50.931207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:59.255 [2024-07-13 06:07:50.931219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.255 [2024-07-13 06:07:50.931231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.255 [2024-07-13 06:07:50.931253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.255 [2024-07-13 06:07:50.931267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:59.255 [2024-07-13 06:07:50.931278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.255 [2024-07-13 06:07:50.931295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.255 [2024-07-13 06:07:50.938995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.255 [2024-07-13 06:07:50.939063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:59.255 [2024-07-13 06:07:50.939095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.255 [2024-07-13 06:07:50.939106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.255 [2024-07-13 06:07:50.945560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.255 [2024-07-13 06:07:50.945623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:59.255 [2024-07-13 06:07:50.945672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.255 [2024-07-13 06:07:50.945692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.255 [2024-07-13 06:07:50.945742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.255 [2024-07-13 06:07:50.945756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:59.255 [2024-07-13 06:07:50.945766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.255 [2024-07-13 06:07:50.945775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.255 [2024-07-13 06:07:50.945833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.255 [2024-07-13 06:07:50.945847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:59.255 [2024-07-13 06:07:50.945858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.255 [2024-07-13 06:07:50.945868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.255 [2024-07-13 06:07:50.946122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.255 [2024-07-13 06:07:50.946241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:59.255 [2024-07-13 06:07:50.946258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.255 [2024-07-13 06:07:50.946269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.255 [2024-07-13 06:07:50.946325] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.255 [2024-07-13 06:07:50.946343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:59.255 [2024-07-13 06:07:50.946363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.255 [2024-07-13 06:07:50.946374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.255 [2024-07-13 06:07:50.946447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.255 [2024-07-13 06:07:50.946468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:59.255 [2024-07-13 06:07:50.946481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.255 [2024-07-13 06:07:50.946491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.255 [2024-07-13 06:07:50.946547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.255 [2024-07-13 06:07:50.946565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:59.255 [2024-07-13 06:07:50.946577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.255 [2024-07-13 06:07:50.946588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.255 [2024-07-13 06:07:50.946798] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 197.600 ms, result 0 00:20:59.823 00:20:59.823 00:20:59.823 06:07:51 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:00.081 [2024-07-13 06:07:51.550110] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
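A quick consistency check on the figures quoted in this pass. The spdk_dd invocation above copies --count=262144 blocks out of ftl0; assuming the usual 4 KiB FTL block size (an assumption, not stated here), that is exactly the 1024 MB the Copying progress reports, and the shutdown stats dump earlier gives the write-amplification factor as total writes over user writes:

# Worked check of numbers quoted in this log (4 KiB block size is an assumption).
blocks = 262144                        # --count passed to spdk_dd above
block_bytes = 4096                     # assumed FTL block size
print(blocks * block_bytes // 2**20)   # -> 1024, matching "Copying: 1024/1024 [MB]"

total_writes = 122816                  # "total writes" from ftl_dev_dump_stats
user_writes = 121856                   # "user writes"
print(f"WAF: {total_writes / user_writes:.4f}")   # -> WAF: 1.0079, as logged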
00:21:00.081 [2024-07-13 06:07:51.550345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92792 ] 00:21:00.081 [2024-07-13 06:07:51.697904] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.081 [2024-07-13 06:07:51.733275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.341 [2024-07-13 06:07:51.816779] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:00.341 [2024-07-13 06:07:51.816885] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:00.341 [2024-07-13 06:07:51.974848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.341 [2024-07-13 06:07:51.974921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:00.341 [2024-07-13 06:07:51.974958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:00.341 [2024-07-13 06:07:51.974970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.341 [2024-07-13 06:07:51.975041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.341 [2024-07-13 06:07:51.975066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:00.341 [2024-07-13 06:07:51.975083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:00.341 [2024-07-13 06:07:51.975094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.341 [2024-07-13 06:07:51.975124] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:00.341 [2024-07-13 06:07:51.975446] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:00.341 [2024-07-13 06:07:51.975477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.341 [2024-07-13 06:07:51.975493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:00.341 [2024-07-13 06:07:51.975508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:21:00.341 [2024-07-13 06:07:51.975519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.341 [2024-07-13 06:07:51.976810] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:00.341 [2024-07-13 06:07:51.979188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.341 [2024-07-13 06:07:51.979263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:00.341 [2024-07-13 06:07:51.979286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.380 ms 00:21:00.341 [2024-07-13 06:07:51.979297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.341 [2024-07-13 06:07:51.979398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.341 [2024-07-13 06:07:51.979434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:00.341 [2024-07-13 06:07:51.979468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:00.341 [2024-07-13 06:07:51.979480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.341 [2024-07-13 06:07:51.984130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:00.341 [2024-07-13 06:07:51.984196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:00.341 [2024-07-13 06:07:51.984213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.552 ms 00:21:00.341 [2024-07-13 06:07:51.984224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.341 [2024-07-13 06:07:51.984344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.341 [2024-07-13 06:07:51.984380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.342 [2024-07-13 06:07:51.984393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:21:00.342 [2024-07-13 06:07:51.984415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.342 [2024-07-13 06:07:51.984516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.342 [2024-07-13 06:07:51.984550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:00.342 [2024-07-13 06:07:51.984569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:00.342 [2024-07-13 06:07:51.984581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.342 [2024-07-13 06:07:51.984616] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:00.342 [2024-07-13 06:07:51.986011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.342 [2024-07-13 06:07:51.986063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.342 [2024-07-13 06:07:51.986078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.404 ms 00:21:00.342 [2024-07-13 06:07:51.986089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.342 [2024-07-13 06:07:51.986200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.342 [2024-07-13 06:07:51.986224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:00.342 [2024-07-13 06:07:51.986237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:00.342 [2024-07-13 06:07:51.986248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.342 [2024-07-13 06:07:51.986295] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:00.342 [2024-07-13 06:07:51.986337] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:00.342 [2024-07-13 06:07:51.986388] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:00.342 [2024-07-13 06:07:51.986421] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:00.342 [2024-07-13 06:07:51.986542] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:00.342 [2024-07-13 06:07:51.986559] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:00.342 [2024-07-13 06:07:51.986573] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:00.342 [2024-07-13 06:07:51.986588] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:00.342 [2024-07-13 06:07:51.986601] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:00.342 [2024-07-13 06:07:51.986614] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:00.342 [2024-07-13 06:07:51.986626] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:00.342 [2024-07-13 06:07:51.986636] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:00.342 [2024-07-13 06:07:51.986647] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:00.342 [2024-07-13 06:07:51.986660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.342 [2024-07-13 06:07:51.986672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:00.342 [2024-07-13 06:07:51.986688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:21:00.342 [2024-07-13 06:07:51.986700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.342 [2024-07-13 06:07:51.986804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.342 [2024-07-13 06:07:51.986823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:00.342 [2024-07-13 06:07:51.986835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:00.342 [2024-07-13 06:07:51.986846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.342 [2024-07-13 06:07:51.986955] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:00.342 [2024-07-13 06:07:51.986973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:00.342 [2024-07-13 06:07:51.986985] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:00.342 [2024-07-13 06:07:51.987001] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.342 [2024-07-13 06:07:51.987013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:00.342 [2024-07-13 06:07:51.987023] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:00.342 [2024-07-13 06:07:51.987034] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:00.342 [2024-07-13 06:07:51.987045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:00.342 [2024-07-13 06:07:51.987055] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:00.342 [2024-07-13 06:07:51.987065] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:00.342 [2024-07-13 06:07:51.987076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:00.342 [2024-07-13 06:07:51.987086] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:00.342 [2024-07-13 06:07:51.987101] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:00.342 [2024-07-13 06:07:51.987114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:00.342 [2024-07-13 06:07:51.987125] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:00.342 [2024-07-13 06:07:51.987175] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.342 [2024-07-13 06:07:51.987189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:00.342 [2024-07-13 06:07:51.987199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:00.342 [2024-07-13 06:07:51.987209] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.342 [2024-07-13 06:07:51.987220] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:00.342 [2024-07-13 06:07:51.987230] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:00.342 [2024-07-13 06:07:51.987240] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.342 [2024-07-13 06:07:51.987251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:00.342 [2024-07-13 06:07:51.987261] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:00.342 [2024-07-13 06:07:51.987270] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.342 [2024-07-13 06:07:51.987280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:00.342 [2024-07-13 06:07:51.987290] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:00.342 [2024-07-13 06:07:51.987300] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.342 [2024-07-13 06:07:51.987315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:00.342 [2024-07-13 06:07:51.987327] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:00.342 [2024-07-13 06:07:51.987336] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.342 [2024-07-13 06:07:51.987346] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:00.342 [2024-07-13 06:07:51.987356] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:00.342 [2024-07-13 06:07:51.987366] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:00.342 [2024-07-13 06:07:51.987376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:00.342 [2024-07-13 06:07:51.987386] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:00.342 [2024-07-13 06:07:51.987396] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:00.342 [2024-07-13 06:07:51.987406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:00.342 [2024-07-13 06:07:51.987415] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:00.342 [2024-07-13 06:07:51.987425] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.342 [2024-07-13 06:07:51.987435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:00.342 [2024-07-13 06:07:51.987446] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:00.342 [2024-07-13 06:07:51.987455] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.342 [2024-07-13 06:07:51.987465] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:00.342 [2024-07-13 06:07:51.987492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:00.342 [2024-07-13 06:07:51.987506] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:00.342 [2024-07-13 06:07:51.987517] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.342 [2024-07-13 06:07:51.987528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:00.342 [2024-07-13 06:07:51.987539] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:00.342 [2024-07-13 06:07:51.987549] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:00.342 
[2024-07-13 06:07:51.987559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:00.342 [2024-07-13 06:07:51.987570] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:00.342 [2024-07-13 06:07:51.987580] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:00.342 [2024-07-13 06:07:51.987592] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:00.342 [2024-07-13 06:07:51.987606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:00.342 [2024-07-13 06:07:51.987628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:00.342 [2024-07-13 06:07:51.987640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:00.342 [2024-07-13 06:07:51.987651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:00.342 [2024-07-13 06:07:51.987662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:00.342 [2024-07-13 06:07:51.987673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:00.342 [2024-07-13 06:07:51.987688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:00.342 [2024-07-13 06:07:51.987700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:00.342 [2024-07-13 06:07:51.987711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:00.342 [2024-07-13 06:07:51.987722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:00.342 [2024-07-13 06:07:51.987733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:00.342 [2024-07-13 06:07:51.987745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:00.343 [2024-07-13 06:07:51.987756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:00.343 [2024-07-13 06:07:51.987767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:00.343 [2024-07-13 06:07:51.987778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:00.343 [2024-07-13 06:07:51.987789] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:00.343 [2024-07-13 06:07:51.987801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:00.343 [2024-07-13 06:07:51.987825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:00.343 [2024-07-13 06:07:51.987836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:00.343 [2024-07-13 06:07:51.987847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:00.343 [2024-07-13 06:07:51.987859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:00.343 [2024-07-13 06:07:51.987871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.343 [2024-07-13 06:07:51.987887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:00.343 [2024-07-13 06:07:51.987911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:21:00.343 [2024-07-13 06:07:51.987922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.343 [2024-07-13 06:07:52.004955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.343 [2024-07-13 06:07:52.005027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:00.343 [2024-07-13 06:07:52.005047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.952 ms 00:21:00.343 [2024-07-13 06:07:52.005064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.343 [2024-07-13 06:07:52.005211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.343 [2024-07-13 06:07:52.005232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:00.343 [2024-07-13 06:07:52.005245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:21:00.343 [2024-07-13 06:07:52.005262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.343 [2024-07-13 06:07:52.013078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.343 [2024-07-13 06:07:52.013158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:00.343 [2024-07-13 06:07:52.013205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.710 ms 00:21:00.343 [2024-07-13 06:07:52.013231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.343 [2024-07-13 06:07:52.013284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.343 [2024-07-13 06:07:52.013301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:00.343 [2024-07-13 06:07:52.013330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:00.343 [2024-07-13 06:07:52.013342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.343 [2024-07-13 06:07:52.013694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.343 [2024-07-13 06:07:52.013732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:00.343 [2024-07-13 06:07:52.013748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:21:00.343 [2024-07-13 06:07:52.013759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.343 [2024-07-13 06:07:52.013920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.343 [2024-07-13 06:07:52.013941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:00.343 [2024-07-13 06:07:52.013954] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:21:00.343 [2024-07-13 06:07:52.013970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.343 [2024-07-13 06:07:52.018737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.343 [2024-07-13 06:07:52.018792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:00.343 [2024-07-13 06:07:52.018808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.739 ms 00:21:00.343 [2024-07-13 06:07:52.018832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.343 [2024-07-13 06:07:52.021267] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:00.343 [2024-07-13 06:07:52.021315] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:00.343 [2024-07-13 06:07:52.021334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.343 [2024-07-13 06:07:52.021351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:00.343 [2024-07-13 06:07:52.021365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.363 ms 00:21:00.343 [2024-07-13 06:07:52.021376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.343 [2024-07-13 06:07:52.036229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.343 [2024-07-13 06:07:52.036284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:00.343 [2024-07-13 06:07:52.036301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.805 ms 00:21:00.343 [2024-07-13 06:07:52.036317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.343 [2024-07-13 06:07:52.038119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.343 [2024-07-13 06:07:52.038185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:00.343 [2024-07-13 06:07:52.038201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.738 ms 00:21:00.343 [2024-07-13 06:07:52.038211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.343 [2024-07-13 06:07:52.039863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.343 [2024-07-13 06:07:52.039914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:00.343 [2024-07-13 06:07:52.039928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.611 ms 00:21:00.343 [2024-07-13 06:07:52.039938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.343 [2024-07-13 06:07:52.040375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.343 [2024-07-13 06:07:52.040425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:00.343 [2024-07-13 06:07:52.040441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:21:00.343 [2024-07-13 06:07:52.040457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.343 [2024-07-13 06:07:52.056676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.343 [2024-07-13 06:07:52.056772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:00.343 [2024-07-13 06:07:52.056792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.193 ms 00:21:00.343 [2024-07-13 06:07:52.056814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.343 [2024-07-13 06:07:52.064919] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:00.602 [2024-07-13 06:07:52.067564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.602 [2024-07-13 06:07:52.067615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:00.602 [2024-07-13 06:07:52.067643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.681 ms 00:21:00.602 [2024-07-13 06:07:52.067655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.602 [2024-07-13 06:07:52.067729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.602 [2024-07-13 06:07:52.067748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:00.602 [2024-07-13 06:07:52.067778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:00.602 [2024-07-13 06:07:52.067789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.602 [2024-07-13 06:07:52.069666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.602 [2024-07-13 06:07:52.069724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:00.602 [2024-07-13 06:07:52.069753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.762 ms 00:21:00.602 [2024-07-13 06:07:52.069764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.602 [2024-07-13 06:07:52.069806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.602 [2024-07-13 06:07:52.069831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:00.602 [2024-07-13 06:07:52.069842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:00.602 [2024-07-13 06:07:52.069852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.602 [2024-07-13 06:07:52.069892] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:00.602 [2024-07-13 06:07:52.069917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.602 [2024-07-13 06:07:52.069942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:00.602 [2024-07-13 06:07:52.069953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:00.602 [2024-07-13 06:07:52.069978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.602 [2024-07-13 06:07:52.073802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.602 [2024-07-13 06:07:52.073857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:00.602 [2024-07-13 06:07:52.073874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.769 ms 00:21:00.602 [2024-07-13 06:07:52.073891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.602 [2024-07-13 06:07:52.073967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.602 [2024-07-13 06:07:52.073996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:00.602 [2024-07-13 06:07:52.074013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:00.602 [2024-07-13 06:07:52.074023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.602 
[2024-07-13 06:07:52.081987] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 105.440 ms, result 0 00:21:42.626 Copying: 1024/1024 [MB] (average 24 MBps) [2024-07-13 06:08:34.142584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.626 [2024-07-13 06:08:34.142652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:42.626 [2024-07-13 06:08:34.142673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:42.626 [2024-07-13 06:08:34.142685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.626 [2024-07-13 06:08:34.142714] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:42.626 [2024-07-13 06:08:34.143226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.626 [2024-07-13 06:08:34.143260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:42.626 [2024-07-13 06:08:34.143276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:21:42.626 [2024-07-13 06:08:34.143287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.626 [2024-07-13 06:08:34.143531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.626 [2024-07-13 06:08:34.143554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:42.626 [2024-07-13 06:08:34.143568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:21:42.626 [2024-07-13 06:08:34.143579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.626 [2024-07-13 06:08:34.148573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.626 [2024-07-13 06:08:34.148618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:42.626 [2024-07-13 06:08:34.148635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.972 ms 00:21:42.626 [2024-07-13 06:08:34.148654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.626 [2024-07-13 06:08:34.155199] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.626 [2024-07-13 06:08:34.155235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:42.626 [2024-07-13 06:08:34.155265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.501 ms 00:21:42.626 [2024-07-13 06:08:34.155276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.626 [2024-07-13 06:08:34.156677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.626 [2024-07-13 06:08:34.156730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:42.626 [2024-07-13 06:08:34.156761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:21:42.626 [2024-07-13 06:08:34.156771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.626 [2024-07-13 06:08:34.159870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.626 [2024-07-13 06:08:34.159923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:42.626 [2024-07-13 06:08:34.159960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.049 ms 00:21:42.626 [2024-07-13 06:08:34.159971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.626 [2024-07-13 06:08:34.279955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.626 [2024-07-13 06:08:34.280017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:42.626 [2024-07-13 06:08:34.280050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.946 ms 00:21:42.626 [2024-07-13 06:08:34.280062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.627 [2024-07-13 06:08:34.281881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.627 [2024-07-13 06:08:34.281917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:42.627 [2024-07-13 06:08:34.281931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.798 ms 00:21:42.627 [2024-07-13 06:08:34.281941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.627 [2024-07-13 06:08:34.283374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.627 [2024-07-13 06:08:34.283409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:42.627 [2024-07-13 06:08:34.283422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.398 ms 00:21:42.627 [2024-07-13 06:08:34.283431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.627 [2024-07-13 06:08:34.284562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.627 [2024-07-13 06:08:34.284595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:42.627 [2024-07-13 06:08:34.284608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.097 ms 00:21:42.627 [2024-07-13 06:08:34.284637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.627 [2024-07-13 06:08:34.285847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.627 [2024-07-13 06:08:34.285883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:42.627 [2024-07-13 06:08:34.285896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.145 ms 00:21:42.627 [2024-07-13 06:08:34.285906] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:42.627 [2024-07-13 06:08:34.285941] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:42.627 [2024-07-13 06:08:34.285962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:21:42.627 [2024-07-13 06:08:34.285975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.285986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.285997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 
wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286883] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:42.627 [2024-07-13 06:08:34.286970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.286981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.286992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.287003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.287014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.287025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.287036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.287047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.287058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.287069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.287080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.287090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.287101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.287113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.287124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.287136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.287147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.287169] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.287183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:42.628 [2024-07-13 06:08:34.287203] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:42.628 [2024-07-13 06:08:34.287224] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b0bf905e-23fc-4217-a19b-6c8f1c14c1b0 00:21:42.628 [2024-07-13 06:08:34.287240] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:21:42.628 [2024-07-13 06:08:34.287251] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 12736 00:21:42.628 [2024-07-13 06:08:34.287269] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 11776 00:21:42.628 [2024-07-13 06:08:34.287280] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0815 00:21:42.628 [2024-07-13 06:08:34.287290] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:42.628 [2024-07-13 06:08:34.287301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:42.628 [2024-07-13 06:08:34.287311] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:42.628 [2024-07-13 06:08:34.287320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:42.628 [2024-07-13 06:08:34.287330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:42.628 [2024-07-13 06:08:34.287340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.628 [2024-07-13 06:08:34.287352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:42.628 [2024-07-13 06:08:34.287363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.401 ms 00:21:42.628 [2024-07-13 06:08:34.287374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.628 [2024-07-13 06:08:34.288739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.628 [2024-07-13 06:08:34.288787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:42.628 [2024-07-13 06:08:34.288801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.343 ms 00:21:42.628 [2024-07-13 06:08:34.288812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.628 [2024-07-13 06:08:34.288894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.628 [2024-07-13 06:08:34.288926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:42.628 [2024-07-13 06:08:34.288938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:42.628 [2024-07-13 06:08:34.288952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.628 [2024-07-13 06:08:34.293158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.628 [2024-07-13 06:08:34.293231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:42.628 [2024-07-13 06:08:34.293247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.628 [2024-07-13 06:08:34.293259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.628 [2024-07-13 06:08:34.293319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.628 [2024-07-13 06:08:34.293335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
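(An aside on the statistics dumped a few records above: the WAF figure works out to total writes divided by user writes, the standard definition of write amplification. A one-line check, assuming bc is available on the test VM:

    $ echo 'scale=4; 12736 / 11776' | bc
    1.0815

which matches the logged 'WAF: 1.0815' for 12736 total writes against 11776 user writes.)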
00:21:42.628 [2024-07-13 06:08:34.293347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.628 [2024-07-13 06:08:34.293365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.628 [2024-07-13 06:08:34.293417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.628 [2024-07-13 06:08:34.293434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:42.628 [2024-07-13 06:08:34.293446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.628 [2024-07-13 06:08:34.293457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.628 [2024-07-13 06:08:34.293487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.628 [2024-07-13 06:08:34.293503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:42.628 [2024-07-13 06:08:34.293514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.628 [2024-07-13 06:08:34.293526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.628 [2024-07-13 06:08:34.301045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.628 [2024-07-13 06:08:34.301113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:42.628 [2024-07-13 06:08:34.301155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.628 [2024-07-13 06:08:34.301167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.628 [2024-07-13 06:08:34.307382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.628 [2024-07-13 06:08:34.307447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:42.628 [2024-07-13 06:08:34.307463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.628 [2024-07-13 06:08:34.307484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.628 [2024-07-13 06:08:34.307551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.628 [2024-07-13 06:08:34.307582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:42.628 [2024-07-13 06:08:34.307609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.628 [2024-07-13 06:08:34.307636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.628 [2024-07-13 06:08:34.307666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.628 [2024-07-13 06:08:34.307679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:42.628 [2024-07-13 06:08:34.307690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.628 [2024-07-13 06:08:34.307701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.628 [2024-07-13 06:08:34.307788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.628 [2024-07-13 06:08:34.307815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:42.628 [2024-07-13 06:08:34.307828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.628 [2024-07-13 06:08:34.307839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.628 [2024-07-13 06:08:34.307892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.628 [2024-07-13 06:08:34.307909] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:42.628 [2024-07-13 06:08:34.307920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.628 [2024-07-13 06:08:34.307931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.628 [2024-07-13 06:08:34.307987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.628 [2024-07-13 06:08:34.308013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:42.628 [2024-07-13 06:08:34.308037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.628 [2024-07-13 06:08:34.308047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.628 [2024-07-13 06:08:34.308099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.628 [2024-07-13 06:08:34.308116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:42.628 [2024-07-13 06:08:34.308127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.628 [2024-07-13 06:08:34.308180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.628 [2024-07-13 06:08:34.308354] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 165.715 ms, result 0 00:21:42.888 00:21:42.888 00:21:42.888 06:08:34 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:45.420 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:45.420 06:08:36 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:45.420 06:08:36 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:21:45.420 06:08:36 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:45.420 06:08:36 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:45.420 06:08:36 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:45.420 06:08:36 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 91216 00:21:45.420 06:08:36 ftl.ftl_restore -- common/autotest_common.sh@948 -- # '[' -z 91216 ']' 00:21:45.420 06:08:36 ftl.ftl_restore -- common/autotest_common.sh@952 -- # kill -0 91216 00:21:45.420 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (91216) - No such process 00:21:45.420 Process with pid 91216 is not found 00:21:45.420 06:08:36 ftl.ftl_restore -- common/autotest_common.sh@975 -- # echo 'Process with pid 91216 is not found' 00:21:45.420 06:08:36 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:21:45.420 Remove shared memory files 00:21:45.420 06:08:36 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:45.420 06:08:36 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:21:45.420 06:08:36 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:21:45.420 06:08:36 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:21:45.420 06:08:36 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:45.420 06:08:36 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:21:45.420 00:21:45.420 real 3m18.537s 00:21:45.420 user 3m5.634s 00:21:45.420 sys 0m14.685s 00:21:45.420 06:08:36 ftl.ftl_restore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:45.420 06:08:36 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 
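The name:/duration: pairs emitted by trace_step make it easy to see where time went in the 'FTL startup' (105.440 ms) and 'FTL shutdown' (165.715 ms) pipelines traced above. A minimal awk sketch to tabulate them, assuming one trace record per line as in the original console output (the file name ftl.log is illustrative):

    awk '
      /trace_step.*name:/     { sub(/.*name: /, "");     name = $0 }
      /trace_step.*duration:/ { sub(/.*duration: /, ""); printf "%-35s %s\n", name, $0 }
    ' ftl.log

On the shutdown pipeline above this would surface 'Persist P2L metadata' (119.946 ms) as the dominant step.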
00:21:45.420 ************************************ 00:21:45.420 END TEST ftl_restore 00:21:45.420 ************************************ 00:21:45.420 06:08:36 ftl -- common/autotest_common.sh@1142 -- # return 0 00:21:45.420 06:08:36 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:45.420 06:08:36 ftl -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:45.420 06:08:36 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:45.420 06:08:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:45.420 ************************************ 00:21:45.420 START TEST ftl_dirty_shutdown 00:21:45.420 ************************************ 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:45.420 * Looking for test storage... 00:21:45.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export 
spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=93299 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 93299 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@829 -- # '[' -z 93299 ']' 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.420 06:08:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:45.421 06:08:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:45.421 [2024-07-13 06:08:36.953058] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
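The launch pattern traced above, spdk_tgt pinned to core 0 via -m 0x1 with the pid recorded for the teardown trap, then blocking until the RPC socket answers, is roughly the following sketch. The polling loop is a simplified stand-in for waitforlisten from autotest_common.sh; the real helper also enforces a timeout rather than waiting forever:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    # poll the default RPC socket until the target responds
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done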
00:21:45.421 [2024-07-13 06:08:36.953273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93299 ] 00:21:45.421 [2024-07-13 06:08:37.098546] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.421 [2024-07-13 06:08:37.135351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.357 06:08:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.357 06:08:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # return 0 00:21:46.357 06:08:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:46.357 06:08:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:21:46.357 06:08:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:46.357 06:08:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:21:46.357 06:08:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:21:46.357 06:08:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:46.616 06:08:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:46.616 06:08:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:21:46.616 06:08:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:46.616 06:08:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:46.616 06:08:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:46.616 06:08:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:46.616 06:08:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:46.616 06:08:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:46.874 06:08:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:46.874 { 00:21:46.874 "name": "nvme0n1", 00:21:46.874 "aliases": [ 00:21:46.874 "85d96257-b8d5-4e7e-9f48-56042b7cb1be" 00:21:46.874 ], 00:21:46.874 "product_name": "NVMe disk", 00:21:46.874 "block_size": 4096, 00:21:46.874 "num_blocks": 1310720, 00:21:46.874 "uuid": "85d96257-b8d5-4e7e-9f48-56042b7cb1be", 00:21:46.874 "assigned_rate_limits": { 00:21:46.874 "rw_ios_per_sec": 0, 00:21:46.874 "rw_mbytes_per_sec": 0, 00:21:46.874 "r_mbytes_per_sec": 0, 00:21:46.874 "w_mbytes_per_sec": 0 00:21:46.874 }, 00:21:46.874 "claimed": true, 00:21:46.874 "claim_type": "read_many_write_one", 00:21:46.874 "zoned": false, 00:21:46.874 "supported_io_types": { 00:21:46.874 "read": true, 00:21:46.874 "write": true, 00:21:46.874 "unmap": true, 00:21:46.874 "flush": true, 00:21:46.874 "reset": true, 00:21:46.874 "nvme_admin": true, 00:21:46.874 "nvme_io": true, 00:21:46.874 "nvme_io_md": false, 00:21:46.874 "write_zeroes": true, 00:21:46.874 "zcopy": false, 00:21:46.874 "get_zone_info": false, 00:21:46.874 "zone_management": false, 00:21:46.874 "zone_append": false, 00:21:46.874 "compare": true, 00:21:46.874 "compare_and_write": false, 00:21:46.874 "abort": true, 00:21:46.874 "seek_hole": false, 00:21:46.874 "seek_data": false, 00:21:46.874 "copy": true, 00:21:46.875 
"nvme_iov_md": false 00:21:46.875 }, 00:21:46.875 "driver_specific": { 00:21:46.875 "nvme": [ 00:21:46.875 { 00:21:46.875 "pci_address": "0000:00:11.0", 00:21:46.875 "trid": { 00:21:46.875 "trtype": "PCIe", 00:21:46.875 "traddr": "0000:00:11.0" 00:21:46.875 }, 00:21:46.875 "ctrlr_data": { 00:21:46.875 "cntlid": 0, 00:21:46.875 "vendor_id": "0x1b36", 00:21:46.875 "model_number": "QEMU NVMe Ctrl", 00:21:46.875 "serial_number": "12341", 00:21:46.875 "firmware_revision": "8.0.0", 00:21:46.875 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:46.875 "oacs": { 00:21:46.875 "security": 0, 00:21:46.875 "format": 1, 00:21:46.875 "firmware": 0, 00:21:46.875 "ns_manage": 1 00:21:46.875 }, 00:21:46.875 "multi_ctrlr": false, 00:21:46.875 "ana_reporting": false 00:21:46.875 }, 00:21:46.875 "vs": { 00:21:46.875 "nvme_version": "1.4" 00:21:46.875 }, 00:21:46.875 "ns_data": { 00:21:46.875 "id": 1, 00:21:46.875 "can_share": false 00:21:46.875 } 00:21:46.875 } 00:21:46.875 ], 00:21:46.875 "mp_policy": "active_passive" 00:21:46.875 } 00:21:46.875 } 00:21:46.875 ]' 00:21:46.875 06:08:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:46.875 06:08:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:46.875 06:08:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:46.875 06:08:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:46.875 06:08:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:46.875 06:08:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:21:46.875 06:08:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:21:46.875 06:08:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:46.875 06:08:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:21:46.875 06:08:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:46.875 06:08:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:47.134 06:08:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=c6d74a56-0b76-4eb7-8058-6a682783f761 00:21:47.134 06:08:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:21:47.134 06:08:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c6d74a56-0b76-4eb7-8058-6a682783f761 00:21:47.392 06:08:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:47.651 06:08:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=6fc340ec-97ca-4c1c-b204-cf9da6b00e2e 00:21:47.651 06:08:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6fc340ec-97ca-4c1c-b204-cf9da6b00e2e 00:21:47.910 06:08:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=41cffdd4-78a6-4838-9119-4f3bd0599697 00:21:47.910 06:08:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:21:47.910 06:08:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 41cffdd4-78a6-4838-9119-4f3bd0599697 00:21:47.910 06:08:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:21:47.910 06:08:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:47.910 
06:08:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=41cffdd4-78a6-4838-9119-4f3bd0599697 00:21:47.910 06:08:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:21:47.910 06:08:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 41cffdd4-78a6-4838-9119-4f3bd0599697 00:21:47.910 06:08:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=41cffdd4-78a6-4838-9119-4f3bd0599697 00:21:47.910 06:08:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:47.910 06:08:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:47.910 06:08:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:47.910 06:08:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41cffdd4-78a6-4838-9119-4f3bd0599697 00:21:48.169 06:08:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:48.169 { 00:21:48.169 "name": "41cffdd4-78a6-4838-9119-4f3bd0599697", 00:21:48.169 "aliases": [ 00:21:48.169 "lvs/nvme0n1p0" 00:21:48.169 ], 00:21:48.169 "product_name": "Logical Volume", 00:21:48.169 "block_size": 4096, 00:21:48.169 "num_blocks": 26476544, 00:21:48.169 "uuid": "41cffdd4-78a6-4838-9119-4f3bd0599697", 00:21:48.169 "assigned_rate_limits": { 00:21:48.169 "rw_ios_per_sec": 0, 00:21:48.169 "rw_mbytes_per_sec": 0, 00:21:48.169 "r_mbytes_per_sec": 0, 00:21:48.169 "w_mbytes_per_sec": 0 00:21:48.169 }, 00:21:48.169 "claimed": false, 00:21:48.169 "zoned": false, 00:21:48.169 "supported_io_types": { 00:21:48.169 "read": true, 00:21:48.169 "write": true, 00:21:48.169 "unmap": true, 00:21:48.169 "flush": false, 00:21:48.169 "reset": true, 00:21:48.169 "nvme_admin": false, 00:21:48.169 "nvme_io": false, 00:21:48.169 "nvme_io_md": false, 00:21:48.169 "write_zeroes": true, 00:21:48.169 "zcopy": false, 00:21:48.169 "get_zone_info": false, 00:21:48.169 "zone_management": false, 00:21:48.169 "zone_append": false, 00:21:48.169 "compare": false, 00:21:48.169 "compare_and_write": false, 00:21:48.169 "abort": false, 00:21:48.169 "seek_hole": true, 00:21:48.169 "seek_data": true, 00:21:48.169 "copy": false, 00:21:48.169 "nvme_iov_md": false 00:21:48.169 }, 00:21:48.169 "driver_specific": { 00:21:48.169 "lvol": { 00:21:48.169 "lvol_store_uuid": "6fc340ec-97ca-4c1c-b204-cf9da6b00e2e", 00:21:48.169 "base_bdev": "nvme0n1", 00:21:48.169 "thin_provision": true, 00:21:48.169 "num_allocated_clusters": 0, 00:21:48.169 "snapshot": false, 00:21:48.169 "clone": false, 00:21:48.169 "esnap_clone": false 00:21:48.169 } 00:21:48.169 } 00:21:48.169 } 00:21:48.169 ]' 00:21:48.169 06:08:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:48.169 06:08:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:48.169 06:08:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:48.169 06:08:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:48.169 06:08:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:48.169 06:08:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:48.169 06:08:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:21:48.169 06:08:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:21:48.169 06:08:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:48.736 06:08:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:48.736 06:08:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:48.736 06:08:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 41cffdd4-78a6-4838-9119-4f3bd0599697 00:21:48.736 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=41cffdd4-78a6-4838-9119-4f3bd0599697 00:21:48.736 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:48.736 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:48.736 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:48.737 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41cffdd4-78a6-4838-9119-4f3bd0599697 00:21:48.737 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:48.737 { 00:21:48.737 "name": "41cffdd4-78a6-4838-9119-4f3bd0599697", 00:21:48.737 "aliases": [ 00:21:48.737 "lvs/nvme0n1p0" 00:21:48.737 ], 00:21:48.737 "product_name": "Logical Volume", 00:21:48.737 "block_size": 4096, 00:21:48.737 "num_blocks": 26476544, 00:21:48.737 "uuid": "41cffdd4-78a6-4838-9119-4f3bd0599697", 00:21:48.737 "assigned_rate_limits": { 00:21:48.737 "rw_ios_per_sec": 0, 00:21:48.737 "rw_mbytes_per_sec": 0, 00:21:48.737 "r_mbytes_per_sec": 0, 00:21:48.737 "w_mbytes_per_sec": 0 00:21:48.737 }, 00:21:48.737 "claimed": false, 00:21:48.737 "zoned": false, 00:21:48.737 "supported_io_types": { 00:21:48.737 "read": true, 00:21:48.737 "write": true, 00:21:48.737 "unmap": true, 00:21:48.737 "flush": false, 00:21:48.737 "reset": true, 00:21:48.737 "nvme_admin": false, 00:21:48.737 "nvme_io": false, 00:21:48.737 "nvme_io_md": false, 00:21:48.737 "write_zeroes": true, 00:21:48.737 "zcopy": false, 00:21:48.737 "get_zone_info": false, 00:21:48.737 "zone_management": false, 00:21:48.737 "zone_append": false, 00:21:48.737 "compare": false, 00:21:48.737 "compare_and_write": false, 00:21:48.737 "abort": false, 00:21:48.737 "seek_hole": true, 00:21:48.737 "seek_data": true, 00:21:48.737 "copy": false, 00:21:48.737 "nvme_iov_md": false 00:21:48.737 }, 00:21:48.737 "driver_specific": { 00:21:48.737 "lvol": { 00:21:48.737 "lvol_store_uuid": "6fc340ec-97ca-4c1c-b204-cf9da6b00e2e", 00:21:48.737 "base_bdev": "nvme0n1", 00:21:48.737 "thin_provision": true, 00:21:48.737 "num_allocated_clusters": 0, 00:21:48.737 "snapshot": false, 00:21:48.737 "clone": false, 00:21:48.737 "esnap_clone": false 00:21:48.737 } 00:21:48.737 } 00:21:48.737 } 00:21:48.737 ]' 00:21:48.737 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:48.995 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:48.995 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:48.995 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:48.995 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:48.995 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:48.995 06:08:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:21:48.995 06:08:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:49.254 06:08:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:49.254 06:08:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 41cffdd4-78a6-4838-9119-4f3bd0599697 00:21:49.254 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=41cffdd4-78a6-4838-9119-4f3bd0599697 00:21:49.254 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:49.254 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:49.254 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:49.254 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 41cffdd4-78a6-4838-9119-4f3bd0599697 00:21:49.254 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:49.254 { 00:21:49.254 "name": "41cffdd4-78a6-4838-9119-4f3bd0599697", 00:21:49.254 "aliases": [ 00:21:49.254 "lvs/nvme0n1p0" 00:21:49.254 ], 00:21:49.254 "product_name": "Logical Volume", 00:21:49.254 "block_size": 4096, 00:21:49.254 "num_blocks": 26476544, 00:21:49.254 "uuid": "41cffdd4-78a6-4838-9119-4f3bd0599697", 00:21:49.254 "assigned_rate_limits": { 00:21:49.254 "rw_ios_per_sec": 0, 00:21:49.254 "rw_mbytes_per_sec": 0, 00:21:49.254 "r_mbytes_per_sec": 0, 00:21:49.254 "w_mbytes_per_sec": 0 00:21:49.254 }, 00:21:49.254 "claimed": false, 00:21:49.254 "zoned": false, 00:21:49.254 "supported_io_types": { 00:21:49.254 "read": true, 00:21:49.254 "write": true, 00:21:49.254 "unmap": true, 00:21:49.254 "flush": false, 00:21:49.254 "reset": true, 00:21:49.254 "nvme_admin": false, 00:21:49.254 "nvme_io": false, 00:21:49.254 "nvme_io_md": false, 00:21:49.254 "write_zeroes": true, 00:21:49.254 "zcopy": false, 00:21:49.254 "get_zone_info": false, 00:21:49.254 "zone_management": false, 00:21:49.254 "zone_append": false, 00:21:49.254 "compare": false, 00:21:49.254 "compare_and_write": false, 00:21:49.254 "abort": false, 00:21:49.254 "seek_hole": true, 00:21:49.254 "seek_data": true, 00:21:49.254 "copy": false, 00:21:49.254 "nvme_iov_md": false 00:21:49.254 }, 00:21:49.254 "driver_specific": { 00:21:49.254 "lvol": { 00:21:49.254 "lvol_store_uuid": "6fc340ec-97ca-4c1c-b204-cf9da6b00e2e", 00:21:49.254 "base_bdev": "nvme0n1", 00:21:49.254 "thin_provision": true, 00:21:49.254 "num_allocated_clusters": 0, 00:21:49.254 "snapshot": false, 00:21:49.254 "clone": false, 00:21:49.254 "esnap_clone": false 00:21:49.254 } 00:21:49.254 } 00:21:49.254 } 00:21:49.254 ]' 00:21:49.254 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:49.513 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:49.513 06:08:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:49.513 06:08:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:49.513 06:08:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:49.513 06:08:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:49.513 06:08:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:49.513 06:08:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 41cffdd4-78a6-4838-9119-4f3bd0599697 
--l2p_dram_limit 10' 00:21:49.513 06:08:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:49.513 06:08:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:21:49.513 06:08:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:49.513 06:08:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 41cffdd4-78a6-4838-9119-4f3bd0599697 --l2p_dram_limit 10 -c nvc0n1p0 00:21:49.773 [2024-07-13 06:08:41.289916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.773 [2024-07-13 06:08:41.289988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:49.773 [2024-07-13 06:08:41.290022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:49.773 [2024-07-13 06:08:41.290035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.773 [2024-07-13 06:08:41.290111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.773 [2024-07-13 06:08:41.290160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:49.773 [2024-07-13 06:08:41.290194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:49.773 [2024-07-13 06:08:41.290207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.773 [2024-07-13 06:08:41.290267] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:49.773 [2024-07-13 06:08:41.290622] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:49.773 [2024-07-13 06:08:41.290665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.773 [2024-07-13 06:08:41.290680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:49.773 [2024-07-13 06:08:41.290697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:21:49.773 [2024-07-13 06:08:41.290709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.773 [2024-07-13 06:08:41.290879] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 660d0a3d-086e-4cc6-b9c2-1269021638fc 00:21:49.773 [2024-07-13 06:08:41.291954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.773 [2024-07-13 06:08:41.291998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:49.773 [2024-07-13 06:08:41.292017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:49.773 [2024-07-13 06:08:41.292041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.773 [2024-07-13 06:08:41.296773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.773 [2024-07-13 06:08:41.296838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:49.773 [2024-07-13 06:08:41.296855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.670 ms 00:21:49.773 [2024-07-13 06:08:41.296870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.773 [2024-07-13 06:08:41.296973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.773 [2024-07-13 06:08:41.297000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:49.774 [2024-07-13 06:08:41.297014] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:21:49.774 [2024-07-13 06:08:41.297029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.774 [2024-07-13 06:08:41.297112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.774 [2024-07-13 06:08:41.297136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:49.774 [2024-07-13 06:08:41.297166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:49.774 [2024-07-13 06:08:41.297194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.774 [2024-07-13 06:08:41.297231] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:49.774 [2024-07-13 06:08:41.298820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.774 [2024-07-13 06:08:41.298871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:49.774 [2024-07-13 06:08:41.298890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.595 ms 00:21:49.774 [2024-07-13 06:08:41.298904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.774 [2024-07-13 06:08:41.298953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.774 [2024-07-13 06:08:41.298971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:49.774 [2024-07-13 06:08:41.298986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:49.774 [2024-07-13 06:08:41.298999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.774 [2024-07-13 06:08:41.299049] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:49.774 [2024-07-13 06:08:41.299262] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:49.774 [2024-07-13 06:08:41.299288] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:49.774 [2024-07-13 06:08:41.299305] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:49.774 [2024-07-13 06:08:41.299325] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:49.774 [2024-07-13 06:08:41.299341] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:49.774 [2024-07-13 06:08:41.299356] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:49.774 [2024-07-13 06:08:41.299370] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:49.774 [2024-07-13 06:08:41.299384] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:49.774 [2024-07-13 06:08:41.299396] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:49.774 [2024-07-13 06:08:41.299411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.774 [2024-07-13 06:08:41.299432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:49.774 [2024-07-13 06:08:41.299448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:21:49.774 [2024-07-13 06:08:41.299460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.774 [2024-07-13 06:08:41.299560] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.774 [2024-07-13 06:08:41.299577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:49.774 [2024-07-13 06:08:41.299595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:21:49.774 [2024-07-13 06:08:41.299616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.774 [2024-07-13 06:08:41.299730] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:49.774 [2024-07-13 06:08:41.299748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:49.774 [2024-07-13 06:08:41.299764] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:49.774 [2024-07-13 06:08:41.299777] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.774 [2024-07-13 06:08:41.299792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:49.774 [2024-07-13 06:08:41.299804] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:49.774 [2024-07-13 06:08:41.299820] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:49.774 [2024-07-13 06:08:41.299832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:49.774 [2024-07-13 06:08:41.299846] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:49.774 [2024-07-13 06:08:41.299858] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:49.774 [2024-07-13 06:08:41.299872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:49.774 [2024-07-13 06:08:41.299883] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:49.774 [2024-07-13 06:08:41.299897] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:49.774 [2024-07-13 06:08:41.299909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:49.774 [2024-07-13 06:08:41.299926] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:49.774 [2024-07-13 06:08:41.299938] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.774 [2024-07-13 06:08:41.299951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:49.774 [2024-07-13 06:08:41.299963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:49.774 [2024-07-13 06:08:41.299977] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.774 [2024-07-13 06:08:41.299989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:49.774 [2024-07-13 06:08:41.300002] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:49.774 [2024-07-13 06:08:41.300014] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:49.774 [2024-07-13 06:08:41.300028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:49.774 [2024-07-13 06:08:41.300039] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:49.774 [2024-07-13 06:08:41.300053] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:49.774 [2024-07-13 06:08:41.300065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:49.774 [2024-07-13 06:08:41.300078] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:49.774 [2024-07-13 06:08:41.300090] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:49.774 [2024-07-13 06:08:41.300103] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:49.774 [2024-07-13 06:08:41.300115] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:49.774 [2024-07-13 06:08:41.300143] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:49.774 [2024-07-13 06:08:41.300158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:49.774 [2024-07-13 06:08:41.300175] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:49.774 [2024-07-13 06:08:41.300187] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:49.774 [2024-07-13 06:08:41.300202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:49.774 [2024-07-13 06:08:41.300213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:49.774 [2024-07-13 06:08:41.300227] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:49.774 [2024-07-13 06:08:41.300239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:49.774 [2024-07-13 06:08:41.300253] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:49.774 [2024-07-13 06:08:41.300264] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.774 [2024-07-13 06:08:41.300278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:49.774 [2024-07-13 06:08:41.300289] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:49.774 [2024-07-13 06:08:41.300302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.774 [2024-07-13 06:08:41.300314] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:49.774 [2024-07-13 06:08:41.300328] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:49.774 [2024-07-13 06:08:41.300343] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:49.774 [2024-07-13 06:08:41.300361] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:49.774 [2024-07-13 06:08:41.300374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:49.774 [2024-07-13 06:08:41.300387] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:49.774 [2024-07-13 06:08:41.300399] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:49.774 [2024-07-13 06:08:41.300416] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:49.774 [2024-07-13 06:08:41.300427] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:49.774 [2024-07-13 06:08:41.300441] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:49.774 [2024-07-13 06:08:41.300469] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:49.774 [2024-07-13 06:08:41.300491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:49.774 [2024-07-13 06:08:41.300504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:49.774 [2024-07-13 06:08:41.300519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:49.774 [2024-07-13 06:08:41.300531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:49.774 [2024-07-13 06:08:41.300545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:49.774 [2024-07-13 06:08:41.300557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:49.774 [2024-07-13 06:08:41.300571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:49.774 [2024-07-13 06:08:41.300583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:49.774 [2024-07-13 06:08:41.300600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:49.774 [2024-07-13 06:08:41.300612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:49.774 [2024-07-13 06:08:41.300626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:49.774 [2024-07-13 06:08:41.300638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:49.774 [2024-07-13 06:08:41.300654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:49.774 [2024-07-13 06:08:41.300666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:49.774 [2024-07-13 06:08:41.300680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:49.774 [2024-07-13 06:08:41.300692] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:49.774 [2024-07-13 06:08:41.300708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:49.775 [2024-07-13 06:08:41.300721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:49.775 [2024-07-13 06:08:41.300735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:49.775 [2024-07-13 06:08:41.300748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:49.775 [2024-07-13 06:08:41.300762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:49.775 [2024-07-13 06:08:41.300776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:49.775 [2024-07-13 06:08:41.300790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:49.775 [2024-07-13 06:08:41.300803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:21:49.775 [2024-07-13 06:08:41.300820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:49.775 [2024-07-13 06:08:41.300878] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:49.775 [2024-07-13 06:08:41.300901] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:51.676 [2024-07-13 06:08:43.265776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.676 [2024-07-13 06:08:43.265863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:51.676 [2024-07-13 06:08:43.265901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1964.910 ms 00:21:51.676 [2024-07-13 06:08:43.265915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.676 [2024-07-13 06:08:43.273398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.676 [2024-07-13 06:08:43.273459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:51.676 [2024-07-13 06:08:43.273479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.398 ms 00:21:51.676 [2024-07-13 06:08:43.273495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.676 [2024-07-13 06:08:43.273676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.676 [2024-07-13 06:08:43.273714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:51.676 [2024-07-13 06:08:43.273728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:51.676 [2024-07-13 06:08:43.273749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.676 [2024-07-13 06:08:43.281797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.676 [2024-07-13 06:08:43.281867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:51.676 [2024-07-13 06:08:43.281907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.954 ms 00:21:51.676 [2024-07-13 06:08:43.281921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.676 [2024-07-13 06:08:43.281981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.676 [2024-07-13 06:08:43.282002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:51.676 [2024-07-13 06:08:43.282015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:51.676 [2024-07-13 06:08:43.282029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.676 [2024-07-13 06:08:43.282399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.676 [2024-07-13 06:08:43.282431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:51.676 [2024-07-13 06:08:43.282446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:21:51.676 [2024-07-13 06:08:43.282461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.676 [2024-07-13 06:08:43.282644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.676 [2024-07-13 06:08:43.282671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:51.676 [2024-07-13 06:08:43.282684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:21:51.676 [2024-07-13 06:08:43.282698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.676 [2024-07-13 06:08:43.288288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.676 [2024-07-13 06:08:43.288363] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:51.676 [2024-07-13 06:08:43.288396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.564 ms 00:21:51.676 [2024-07-13 06:08:43.288420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.676 [2024-07-13 06:08:43.297082] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:51.676 [2024-07-13 06:08:43.299941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.676 [2024-07-13 06:08:43.299989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:51.676 [2024-07-13 06:08:43.300025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.419 ms 00:21:51.676 [2024-07-13 06:08:43.300037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.676 [2024-07-13 06:08:43.350216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.676 [2024-07-13 06:08:43.350298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:51.676 [2024-07-13 06:08:43.350342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.124 ms 00:21:51.676 [2024-07-13 06:08:43.350356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.676 [2024-07-13 06:08:43.350626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.676 [2024-07-13 06:08:43.350655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:51.676 [2024-07-13 06:08:43.350673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:21:51.676 [2024-07-13 06:08:43.350686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.676 [2024-07-13 06:08:43.354342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.676 [2024-07-13 06:08:43.354383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:51.676 [2024-07-13 06:08:43.354418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.623 ms 00:21:51.676 [2024-07-13 06:08:43.354444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.676 [2024-07-13 06:08:43.357450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.676 [2024-07-13 06:08:43.357522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:51.676 [2024-07-13 06:08:43.357573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.955 ms 00:21:51.676 [2024-07-13 06:08:43.357585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.676 [2024-07-13 06:08:43.357964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.676 [2024-07-13 06:08:43.357993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:51.676 [2024-07-13 06:08:43.358010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:21:51.676 [2024-07-13 06:08:43.358023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.676 [2024-07-13 06:08:43.388658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.676 [2024-07-13 06:08:43.388713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:51.676 [2024-07-13 06:08:43.388752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.597 ms 00:21:51.676 [2024-07-13 06:08:43.388768] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.676 [2024-07-13 06:08:43.393041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.676 [2024-07-13 06:08:43.393079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:51.676 [2024-07-13 06:08:43.393114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.220 ms 00:21:51.676 [2024-07-13 06:08:43.393126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.676 [2024-07-13 06:08:43.396821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.676 [2024-07-13 06:08:43.396873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:51.677 [2024-07-13 06:08:43.396906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.592 ms 00:21:51.677 [2024-07-13 06:08:43.396918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.677 [2024-07-13 06:08:43.401217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.677 [2024-07-13 06:08:43.401260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:51.677 [2024-07-13 06:08:43.401282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.201 ms 00:21:51.677 [2024-07-13 06:08:43.401296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.677 [2024-07-13 06:08:43.401357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.677 [2024-07-13 06:08:43.401377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:51.677 [2024-07-13 06:08:43.401394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:51.677 [2024-07-13 06:08:43.401406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.677 [2024-07-13 06:08:43.401487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.677 [2024-07-13 06:08:43.401505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:51.677 [2024-07-13 06:08:43.401521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:51.677 [2024-07-13 06:08:43.401536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.935 [2024-07-13 06:08:43.402723] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2112.246 ms, result 0 00:21:51.935 { 00:21:51.935 "name": "ftl0", 00:21:51.935 "uuid": "660d0a3d-086e-4cc6-b9c2-1269021638fc" 00:21:51.935 } 00:21:51.935 06:08:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:21:51.935 06:08:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:51.935 06:08:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:21:51.935 06:08:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:21:51.935 06:08:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:21:52.194 /dev/nbd0 00:21:52.452 06:08:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:21:52.452 06:08:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:52.452 06:08:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # local i 00:21:52.452 06:08:43 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:52.452 06:08:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:52.452 06:08:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:52.452 06:08:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # break 00:21:52.452 06:08:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:52.452 06:08:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:52.452 06:08:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:21:52.452 1+0 records in 00:21:52.452 1+0 records out 00:21:52.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430026 s, 9.5 MB/s 00:21:52.452 06:08:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:52.452 06:08:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # size=4096 00:21:52.452 06:08:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:52.452 06:08:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:52.452 06:08:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # return 0 00:21:52.452 06:08:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:21:52.452 [2024-07-13 06:08:44.038815] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:21:52.452 [2024-07-13 06:08:44.039023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93431 ] 00:21:52.711 [2024-07-13 06:08:44.187076] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.711 [2024-07-13 06:08:44.229118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.682  Copying: 178/1024 [MB] (178 MBps) Copying: 366/1024 [MB] (187 MBps) Copying: 551/1024 [MB] (185 MBps) Copying: 728/1024 [MB] (176 MBps) Copying: 899/1024 [MB] (170 MBps) Copying: 1024/1024 [MB] (average 179 MBps) 00:21:58.682 00:21:58.683 06:08:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:00.585 06:08:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:00.844 [2024-07-13 06:08:52.373082] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
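
(For readers retracing this setup outside the harness, the xtrace above condenses to roughly the following sketch. $SPDK and $RPC are shorthands introduced here, not names from the trace; every command, UUID and value is copied from the log itself. The lvol reports 26476544 blocks x 4096 B = 103424 MiB, the 5171 MiB cache slice is consistent with a ~5 % sizing of that, and --l2p_dram_limit 10 caps the resident L2P table at 10 MiB, versus the full 20971520 entries x 4 B = 80 MiB table seen in the layout dump.)

    SPDK=/home/vagrant/spdk_repo/spdk          # shorthand for this sketch only
    RPC="$SPDK/scripts/rpc.py"
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create nvc0n1 -s 5171 1    # carve the 5171 MiB cache slice nvc0n1p0
    $RPC -t 240 bdev_ftl_create -b ftl0 -d 41cffdd4-78a6-4838-9119-4f3bd0599697 \
        --l2p_dram_limit 10 -c nvc0n1p0
    $RPC nbd_start_disk ftl0 /dev/nbd0
    $SPDK/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=$SPDK/test/ftl/testfile \
        --bs=4096 --count=262144               # 1 GiB of random data
    md5sum $SPDK/test/ftl/testfile             # reference checksum for later comparison
    $SPDK/build/bin/spdk_dd -m 0x2 --if=$SPDK/test/ftl/testfile --of=/dev/nbd0 \
        --bs=4096 --count=262144 --oflag=direct   # push it through ftl0
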
00:22:00.844 [2024-07-13 06:08:52.373275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93520 ] 00:22:00.844 [2024-07-13 06:08:52.517376] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.844 [2024-07-13 06:08:52.559688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.956  Copying: 13/1024 [MB] (13 MBps) Copying: 28/1024 [MB] (14 MBps) Copying: 43/1024 [MB] (15 MBps) Copying: 59/1024 [MB] (15 MBps) Copying: 74/1024 [MB] (15 MBps) Copying: 90/1024 [MB] (15 MBps) Copying: 105/1024 [MB] (15 MBps) Copying: 121/1024 [MB] (15 MBps) Copying: 137/1024 [MB] (16 MBps) Copying: 153/1024 [MB] (15 MBps) Copying: 169/1024 [MB] (16 MBps) Copying: 184/1024 [MB] (15 MBps) Copying: 200/1024 [MB] (15 MBps) Copying: 215/1024 [MB] (15 MBps) Copying: 231/1024 [MB] (15 MBps) Copying: 247/1024 [MB] (15 MBps) Copying: 263/1024 [MB] (15 MBps) Copying: 278/1024 [MB] (15 MBps) Copying: 294/1024 [MB] (15 MBps) Copying: 310/1024 [MB] (15 MBps) Copying: 325/1024 [MB] (15 MBps) Copying: 341/1024 [MB] (15 MBps) Copying: 357/1024 [MB] (15 MBps) Copying: 373/1024 [MB] (15 MBps) Copying: 388/1024 [MB] (15 MBps) Copying: 404/1024 [MB] (15 MBps) Copying: 419/1024 [MB] (15 MBps) Copying: 435/1024 [MB] (15 MBps) Copying: 451/1024 [MB] (15 MBps) Copying: 466/1024 [MB] (15 MBps) Copying: 482/1024 [MB] (15 MBps) Copying: 498/1024 [MB] (15 MBps) Copying: 513/1024 [MB] (15 MBps) Copying: 529/1024 [MB] (15 MBps) Copying: 544/1024 [MB] (15 MBps) Copying: 560/1024 [MB] (15 MBps) Copying: 576/1024 [MB] (15 MBps) Copying: 591/1024 [MB] (15 MBps) Copying: 607/1024 [MB] (15 MBps) Copying: 622/1024 [MB] (15 MBps) Copying: 638/1024 [MB] (15 MBps) Copying: 653/1024 [MB] (15 MBps) Copying: 668/1024 [MB] (15 MBps) Copying: 683/1024 [MB] (15 MBps) Copying: 699/1024 [MB] (15 MBps) Copying: 715/1024 [MB] (15 MBps) Copying: 730/1024 [MB] (15 MBps) Copying: 745/1024 [MB] (15 MBps) Copying: 761/1024 [MB] (15 MBps) Copying: 777/1024 [MB] (15 MBps) Copying: 792/1024 [MB] (15 MBps) Copying: 808/1024 [MB] (15 MBps) Copying: 823/1024 [MB] (15 MBps) Copying: 838/1024 [MB] (15 MBps) Copying: 854/1024 [MB] (15 MBps) Copying: 870/1024 [MB] (15 MBps) Copying: 885/1024 [MB] (15 MBps) Copying: 901/1024 [MB] (15 MBps) Copying: 917/1024 [MB] (15 MBps) Copying: 933/1024 [MB] (15 MBps) Copying: 948/1024 [MB] (15 MBps) Copying: 964/1024 [MB] (15 MBps) Copying: 980/1024 [MB] (15 MBps) Copying: 995/1024 [MB] (15 MBps) Copying: 1011/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 15 MBps) 00:23:06.956 00:23:06.956 06:09:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:06.956 06:09:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:07.214 06:09:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:07.476 [2024-07-13 06:09:59.109725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.476 [2024-07-13 06:09:59.109804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:07.476 [2024-07-13 06:09:59.109841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:07.476 [2024-07-13 06:09:59.109856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:07.476 [2024-07-13 06:09:59.109892] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:07.476 [2024-07-13 06:09:59.110392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.476 [2024-07-13 06:09:59.110422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:07.476 [2024-07-13 06:09:59.110443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:23:07.476 [2024-07-13 06:09:59.110455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.476 [2024-07-13 06:09:59.112309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.476 [2024-07-13 06:09:59.112365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:07.476 [2024-07-13 06:09:59.112387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.801 ms 00:23:07.476 [2024-07-13 06:09:59.112400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.476 [2024-07-13 06:09:59.127741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.476 [2024-07-13 06:09:59.127796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:07.476 [2024-07-13 06:09:59.127831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.310 ms 00:23:07.476 [2024-07-13 06:09:59.127843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.476 [2024-07-13 06:09:59.134124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.476 [2024-07-13 06:09:59.134180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:07.476 [2024-07-13 06:09:59.134215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.234 ms 00:23:07.476 [2024-07-13 06:09:59.134227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.476 [2024-07-13 06:09:59.135660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.476 [2024-07-13 06:09:59.135727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:07.476 [2024-07-13 06:09:59.135763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.315 ms 00:23:07.476 [2024-07-13 06:09:59.135776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.476 [2024-07-13 06:09:59.139969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.476 [2024-07-13 06:09:59.140025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:07.476 [2024-07-13 06:09:59.140060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.146 ms 00:23:07.476 [2024-07-13 06:09:59.140073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.476 [2024-07-13 06:09:59.140279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.476 [2024-07-13 06:09:59.140309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:07.476 [2024-07-13 06:09:59.140328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:23:07.476 [2024-07-13 06:09:59.140341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.476 [2024-07-13 06:09:59.142205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.476 [2024-07-13 06:09:59.142297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info 
metadata 00:23:07.476 [2024-07-13 06:09:59.142334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.817 ms 00:23:07.476 [2024-07-13 06:09:59.142346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.476 [2024-07-13 06:09:59.143859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.476 [2024-07-13 06:09:59.143909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:07.476 [2024-07-13 06:09:59.143944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.456 ms 00:23:07.476 [2024-07-13 06:09:59.143955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.476 [2024-07-13 06:09:59.145258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.476 [2024-07-13 06:09:59.145305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:07.476 [2024-07-13 06:09:59.145324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms 00:23:07.476 [2024-07-13 06:09:59.145336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.476 [2024-07-13 06:09:59.146380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.476 [2024-07-13 06:09:59.146431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:07.476 [2024-07-13 06:09:59.146481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:23:07.476 [2024-07-13 06:09:59.146493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.476 [2024-07-13 06:09:59.146552] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:07.476 [2024-07-13 06:09:59.146577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 
00:23:07.476 [2024-07-13 06:09:59.146805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.146986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.147000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.147014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.147028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.147041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.147056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.147069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.147085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.147098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.147113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.147125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.147141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 
wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.147153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.147182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.147197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.147212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.147225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:07.476 [2024-07-13 06:09:59.147239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147849] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.147989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.148002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.148017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:07.477 [2024-07-13 06:09:59.148038] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:07.477 [2024-07-13 06:09:59.148055] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 660d0a3d-086e-4cc6-b9c2-1269021638fc 00:23:07.477 [2024-07-13 06:09:59.148068] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:07.477 [2024-07-13 06:09:59.148082] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:07.477 [2024-07-13 06:09:59.148093] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:07.477 [2024-07-13 06:09:59.148107] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:07.477 [2024-07-13 06:09:59.148119] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:07.477 [2024-07-13 06:09:59.148144] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:07.477 [2024-07-13 06:09:59.148158] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:07.477 [2024-07-13 06:09:59.148172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:07.477 [2024-07-13 06:09:59.148183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:07.477 [2024-07-13 06:09:59.148199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.477 [2024-07-13 06:09:59.148212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:07.477 [2024-07-13 06:09:59.148229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.648 ms 00:23:07.477 [2024-07-13 06:09:59.148242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.477 [2024-07-13 06:09:59.149666] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.477 [2024-07-13 06:09:59.149695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:07.477 [2024-07-13 06:09:59.149730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.376 ms 00:23:07.477 [2024-07-13 06:09:59.149743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.477 [2024-07-13 06:09:59.149823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.477 [2024-07-13 06:09:59.149850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:07.477 [2024-07-13 06:09:59.149867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:07.477 [2024-07-13 06:09:59.149878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.477 [2024-07-13 06:09:59.155037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.477 [2024-07-13 06:09:59.155090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:07.477 [2024-07-13 06:09:59.155124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.477 [2024-07-13 06:09:59.155136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.477 [2024-07-13 06:09:59.155212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.477 [2024-07-13 06:09:59.155242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:07.477 [2024-07-13 06:09:59.155273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.477 [2024-07-13 06:09:59.155301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.477 [2024-07-13 06:09:59.155420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.477 [2024-07-13 06:09:59.155450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:07.477 [2024-07-13 06:09:59.155471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.477 [2024-07-13 06:09:59.155483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.477 [2024-07-13 06:09:59.155515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.477 [2024-07-13 06:09:59.155536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:07.477 [2024-07-13 06:09:59.155556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.477 [2024-07-13 06:09:59.155568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.477 [2024-07-13 06:09:59.163626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.477 [2024-07-13 06:09:59.163700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:07.477 [2024-07-13 06:09:59.163737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.477 [2024-07-13 06:09:59.163750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.477 [2024-07-13 06:09:59.170166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.477 [2024-07-13 06:09:59.170261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:07.477 [2024-07-13 06:09:59.170298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.477 [2024-07-13 06:09:59.170312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:07.477 [2024-07-13 06:09:59.170393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.477 [2024-07-13 06:09:59.170422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:07.478 [2024-07-13 06:09:59.170451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.478 [2024-07-13 06:09:59.170479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.478 [2024-07-13 06:09:59.170575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.478 [2024-07-13 06:09:59.170594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:07.478 [2024-07-13 06:09:59.170610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.478 [2024-07-13 06:09:59.170625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.478 [2024-07-13 06:09:59.170722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.478 [2024-07-13 06:09:59.170752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:07.478 [2024-07-13 06:09:59.170770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.478 [2024-07-13 06:09:59.170783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.478 [2024-07-13 06:09:59.170841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.478 [2024-07-13 06:09:59.170860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:07.478 [2024-07-13 06:09:59.170875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.478 [2024-07-13 06:09:59.170888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.478 [2024-07-13 06:09:59.170949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.478 [2024-07-13 06:09:59.170973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:07.478 [2024-07-13 06:09:59.170993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.478 [2024-07-13 06:09:59.171005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.478 [2024-07-13 06:09:59.171067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:07.478 [2024-07-13 06:09:59.171095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:07.478 [2024-07-13 06:09:59.171112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:07.478 [2024-07-13 06:09:59.171127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.478 [2024-07-13 06:09:59.171309] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.536 ms, result 0 00:23:07.478 true 00:23:07.478 06:09:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 93299 00:23:07.478 06:09:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid93299 00:23:07.478 06:09:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:07.737 [2024-07-13 06:09:59.290874] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:23:07.737 [2024-07-13 06:09:59.291091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94190 ] 00:23:07.737 [2024-07-13 06:09:59.439728] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.995 [2024-07-13 06:09:59.474933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.721  Copying: 182/1024 [MB] (182 MBps) Copying: 363/1024 [MB] (181 MBps) Copying: 546/1024 [MB] (182 MBps) Copying: 737/1024 [MB] (191 MBps) Copying: 906/1024 [MB] (168 MBps) Copying: 1024/1024 [MB] (average 179 MBps) 00:23:13.721 00:23:13.721 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 93299 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:13.721 06:10:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:13.979 [2024-07-13 06:10:05.512790] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:13.979 [2024-07-13 06:10:05.512983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94255 ] 00:23:13.979 [2024-07-13 06:10:05.658814] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.979 [2024-07-13 06:10:05.694942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.235 [2024-07-13 06:10:05.777781] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:14.235 [2024-07-13 06:10:05.777863] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:14.235 [2024-07-13 06:10:05.842753] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:14.235 [2024-07-13 06:10:05.843075] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:14.235 [2024-07-13 06:10:05.843318] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:14.494 [2024-07-13 06:10:06.085504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.494 [2024-07-13 06:10:06.085557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:14.494 [2024-07-13 06:10:06.085578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:14.494 [2024-07-13 06:10:06.085601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.494 [2024-07-13 06:10:06.085676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.494 [2024-07-13 06:10:06.085704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:14.494 [2024-07-13 06:10:06.085718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:14.494 [2024-07-13 06:10:06.085729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.494 [2024-07-13 06:10:06.085761] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:14.494 [2024-07-13 06:10:06.086054] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using bdev as NV Cache device 00:23:14.494 [2024-07-13 06:10:06.086098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.494 [2024-07-13 06:10:06.086117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:14.494 [2024-07-13 06:10:06.086156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:23:14.494 [2024-07-13 06:10:06.086170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.494 [2024-07-13 06:10:06.087350] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:14.494 [2024-07-13 06:10:06.089344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.494 [2024-07-13 06:10:06.089387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:14.494 [2024-07-13 06:10:06.089417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.997 ms 00:23:14.494 [2024-07-13 06:10:06.089429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.494 [2024-07-13 06:10:06.089499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.494 [2024-07-13 06:10:06.089519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:14.494 [2024-07-13 06:10:06.089532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:14.494 [2024-07-13 06:10:06.089547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.494 [2024-07-13 06:10:06.093688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.494 [2024-07-13 06:10:06.093728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:14.494 [2024-07-13 06:10:06.093744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.065 ms 00:23:14.494 [2024-07-13 06:10:06.093756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.494 [2024-07-13 06:10:06.093868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.494 [2024-07-13 06:10:06.093891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:14.494 [2024-07-13 06:10:06.093908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:23:14.494 [2024-07-13 06:10:06.093919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.494 [2024-07-13 06:10:06.094004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.494 [2024-07-13 06:10:06.094022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:14.494 [2024-07-13 06:10:06.094034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:14.494 [2024-07-13 06:10:06.094045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.494 [2024-07-13 06:10:06.094086] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:14.494 [2024-07-13 06:10:06.095432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.494 [2024-07-13 06:10:06.095466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:14.494 [2024-07-13 06:10:06.095489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.361 ms 00:23:14.494 [2024-07-13 06:10:06.095501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.494 [2024-07-13 06:10:06.095543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:14.494 [2024-07-13 06:10:06.095559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:14.494 [2024-07-13 06:10:06.095571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:14.494 [2024-07-13 06:10:06.095582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.494 [2024-07-13 06:10:06.095611] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:14.494 [2024-07-13 06:10:06.095638] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:14.494 [2024-07-13 06:10:06.095700] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:14.494 [2024-07-13 06:10:06.095726] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:14.494 [2024-07-13 06:10:06.095831] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:14.494 [2024-07-13 06:10:06.095846] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:14.494 [2024-07-13 06:10:06.095862] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:14.494 [2024-07-13 06:10:06.095877] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:14.494 [2024-07-13 06:10:06.095890] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:14.494 [2024-07-13 06:10:06.095903] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:14.494 [2024-07-13 06:10:06.095914] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:14.494 [2024-07-13 06:10:06.095930] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:14.494 [2024-07-13 06:10:06.095941] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:14.494 [2024-07-13 06:10:06.095953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.494 [2024-07-13 06:10:06.095974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:14.494 [2024-07-13 06:10:06.095986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:23:14.494 [2024-07-13 06:10:06.095998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.494 [2024-07-13 06:10:06.096091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.494 [2024-07-13 06:10:06.096106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:14.494 [2024-07-13 06:10:06.096118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:14.494 [2024-07-13 06:10:06.096150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.494 [2024-07-13 06:10:06.096284] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:14.494 [2024-07-13 06:10:06.096304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:14.494 [2024-07-13 06:10:06.096317] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:14.494 [2024-07-13 06:10:06.096329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.494 [2024-07-13 
06:10:06.096341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:14.494 [2024-07-13 06:10:06.096352] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:14.494 [2024-07-13 06:10:06.096363] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:14.494 [2024-07-13 06:10:06.096375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:14.494 [2024-07-13 06:10:06.096386] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:14.494 [2024-07-13 06:10:06.096397] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:14.494 [2024-07-13 06:10:06.096408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:14.494 [2024-07-13 06:10:06.096419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:14.494 [2024-07-13 06:10:06.096438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:14.494 [2024-07-13 06:10:06.096451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:14.494 [2024-07-13 06:10:06.096463] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:14.494 [2024-07-13 06:10:06.096478] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.494 [2024-07-13 06:10:06.096489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:14.495 [2024-07-13 06:10:06.096500] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:14.495 [2024-07-13 06:10:06.096510] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.495 [2024-07-13 06:10:06.096521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:14.495 [2024-07-13 06:10:06.096532] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:14.495 [2024-07-13 06:10:06.096543] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.495 [2024-07-13 06:10:06.096553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:14.495 [2024-07-13 06:10:06.096564] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:14.495 [2024-07-13 06:10:06.096575] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.495 [2024-07-13 06:10:06.096586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:14.495 [2024-07-13 06:10:06.096596] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:14.495 [2024-07-13 06:10:06.096607] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.495 [2024-07-13 06:10:06.096626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:14.495 [2024-07-13 06:10:06.096641] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:14.495 [2024-07-13 06:10:06.096651] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.495 [2024-07-13 06:10:06.096662] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:14.495 [2024-07-13 06:10:06.096673] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:14.495 [2024-07-13 06:10:06.096684] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:14.495 [2024-07-13 06:10:06.096695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:14.495 [2024-07-13 06:10:06.096706] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 
MiB 00:23:14.495 [2024-07-13 06:10:06.096716] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:14.495 [2024-07-13 06:10:06.096727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:14.495 [2024-07-13 06:10:06.096738] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:14.495 [2024-07-13 06:10:06.096749] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.495 [2024-07-13 06:10:06.096759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:14.495 [2024-07-13 06:10:06.096770] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:14.495 [2024-07-13 06:10:06.096780] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.495 [2024-07-13 06:10:06.096790] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:14.495 [2024-07-13 06:10:06.096805] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:14.495 [2024-07-13 06:10:06.096816] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:14.495 [2024-07-13 06:10:06.096828] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.495 [2024-07-13 06:10:06.096841] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:14.495 [2024-07-13 06:10:06.096852] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:14.495 [2024-07-13 06:10:06.096863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:14.495 [2024-07-13 06:10:06.096874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:14.495 [2024-07-13 06:10:06.096885] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:14.495 [2024-07-13 06:10:06.096896] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:14.495 [2024-07-13 06:10:06.096908] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:14.495 [2024-07-13 06:10:06.096921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:14.495 [2024-07-13 06:10:06.096946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:14.495 [2024-07-13 06:10:06.096959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:14.495 [2024-07-13 06:10:06.096970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:14.495 [2024-07-13 06:10:06.096982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:14.495 [2024-07-13 06:10:06.096994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:14.495 [2024-07-13 06:10:06.097008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:14.495 [2024-07-13 06:10:06.097020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:14.495 [2024-07-13 06:10:06.097031] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:14.495 [2024-07-13 06:10:06.097043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:14.495 [2024-07-13 06:10:06.097055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:14.495 [2024-07-13 06:10:06.097066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:14.495 [2024-07-13 06:10:06.097077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:14.495 [2024-07-13 06:10:06.097100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:14.495 [2024-07-13 06:10:06.097112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:14.495 [2024-07-13 06:10:06.097124] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:14.495 [2024-07-13 06:10:06.097163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:14.495 [2024-07-13 06:10:06.097177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:14.495 [2024-07-13 06:10:06.097189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:14.495 [2024-07-13 06:10:06.097201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:14.495 [2024-07-13 06:10:06.097213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:14.495 [2024-07-13 06:10:06.097226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.097243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:14.495 [2024-07-13 06:10:06.097256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:23:14.495 [2024-07-13 06:10:06.097277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.115233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.115302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:14.495 [2024-07-13 06:10:06.115330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.884 ms 00:23:14.495 [2024-07-13 06:10:06.115353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.115514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.115535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:14.495 [2024-07-13 06:10:06.115567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:23:14.495 [2024-07-13 06:10:06.115583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
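The hex region table in the superblock layout dump above encodes the same information as the MiB-based layout dump earlier in this startup: blk_offs and blk_sz are counts of FTL blocks, and the MiB figures fall out once a 4 KiB block size is assumed. A minimal sketch of that conversion (blk_to_mib is a hypothetical helper, and the 4 KiB block size is inferred from the matching numbers rather than taken from the test):

  # hypothetical helper; assumes 4 KiB FTL blocks
  blk_to_mib() { awk -v b=$(( $1 )) 'BEGIN { printf "%.2f MiB\n", b * 4096 / 1048576 }'; }
  blk_to_mib 0x5000   # l2p:     80.00 MiB ("Region l2p ... blocks: 80.00 MiB")
  blk_to_mib 0x800    # one p2l:  8.00 MiB ("Region p2l0 ... blocks: 8.00 MiB")
  blk_to_mib 0x80     # band_md:  0.50 MiB ("Region band_md ... blocks: 0.50 MiB")
  blk_to_mib 0x20     # sb:       0.12 MiB ("Region sb ... blocks: 0.12 MiB")

The same kind of cross-check applies to the WAF in the statistics dump printed at shutdown further down: 131520 total writes / 130560 user writes ≈ 1.0074, which is exactly the value the log reports.
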
00:23:14.495 [2024-07-13 06:10:06.125188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.125267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:14.495 [2024-07-13 06:10:06.125290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.481 ms 00:23:14.495 [2024-07-13 06:10:06.125315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.125408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.125432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:14.495 [2024-07-13 06:10:06.125450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:14.495 [2024-07-13 06:10:06.125465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.125848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.125867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:14.495 [2024-07-13 06:10:06.125886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:23:14.495 [2024-07-13 06:10:06.125898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.126089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.126108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:14.495 [2024-07-13 06:10:06.126121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:23:14.495 [2024-07-13 06:10:06.126132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.131196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.131252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:14.495 [2024-07-13 06:10:06.131271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.989 ms 00:23:14.495 [2024-07-13 06:10:06.131283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.133575] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:14.495 [2024-07-13 06:10:06.133622] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:14.495 [2024-07-13 06:10:06.133651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.133668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:14.495 [2024-07-13 06:10:06.133682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.240 ms 00:23:14.495 [2024-07-13 06:10:06.133694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.150589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.150661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:14.495 [2024-07-13 06:10:06.150681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.835 ms 00:23:14.495 [2024-07-13 06:10:06.150694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.152917] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.152954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:14.495 [2024-07-13 06:10:06.152970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.106 ms 00:23:14.495 [2024-07-13 06:10:06.152983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.154574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.154608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:14.495 [2024-07-13 06:10:06.154623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms 00:23:14.495 [2024-07-13 06:10:06.154634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.155028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.155054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:14.495 [2024-07-13 06:10:06.155085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:23:14.495 [2024-07-13 06:10:06.155098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.171384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.171457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:14.495 [2024-07-13 06:10:06.171478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.260 ms 00:23:14.495 [2024-07-13 06:10:06.171491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.180397] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:14.495 [2024-07-13 06:10:06.183451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.183492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:14.495 [2024-07-13 06:10:06.183512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.853 ms 00:23:14.495 [2024-07-13 06:10:06.183526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.183625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.183651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:14.495 [2024-07-13 06:10:06.183664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:14.495 [2024-07-13 06:10:06.183689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.183792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.183811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:14.495 [2024-07-13 06:10:06.183829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:14.495 [2024-07-13 06:10:06.183840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.183872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.183908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:14.495 [2024-07-13 06:10:06.183927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.006 ms 00:23:14.495 [2024-07-13 06:10:06.183939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.183991] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:14.495 [2024-07-13 06:10:06.184007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.184025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:14.495 [2024-07-13 06:10:06.184046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:14.495 [2024-07-13 06:10:06.184074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.187440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.187481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:14.495 [2024-07-13 06:10:06.187499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.326 ms 00:23:14.495 [2024-07-13 06:10:06.187520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.187602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.495 [2024-07-13 06:10:06.187631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:14.495 [2024-07-13 06:10:06.187653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:14.495 [2024-07-13 06:10:06.187665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.495 [2024-07-13 06:10:06.188868] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 102.838 ms, result 0 00:23:55.722  Copying: 28/1024 [MB] (28 MBps) Copying: 56/1024 [MB] (27 MBps) Copying: 84/1024 [MB] (27 MBps) Copying: 111/1024 [MB] (27 MBps) Copying: 137/1024 [MB] (26 MBps) Copying: 165/1024 [MB] (27 MBps) Copying: 193/1024 [MB] (28 MBps) Copying: 219/1024 [MB] (25 MBps) Copying: 245/1024 [MB] (26 MBps) Copying: 270/1024 [MB] (25 MBps) Copying: 296/1024 [MB] (25 MBps) Copying: 321/1024 [MB] (24 MBps) Copying: 347/1024 [MB] (26 MBps) Copying: 371/1024 [MB] (24 MBps) Copying: 394/1024 [MB] (23 MBps) Copying: 420/1024 [MB] (25 MBps) Copying: 445/1024 [MB] (25 MBps) Copying: 471/1024 [MB] (25 MBps) Copying: 496/1024 [MB] (25 MBps) Copying: 521/1024 [MB] (24 MBps) Copying: 546/1024 [MB] (25 MBps) Copying: 571/1024 [MB] (24 MBps) Copying: 596/1024 [MB] (25 MBps) Copying: 621/1024 [MB] (25 MBps) Copying: 645/1024 [MB] (23 MBps) Copying: 671/1024 [MB] (25 MBps) Copying: 696/1024 [MB] (25 MBps) Copying: 722/1024 [MB] (26 MBps) Copying: 748/1024 [MB] (25 MBps) Copying: 774/1024 [MB] (26 MBps) Copying: 799/1024 [MB] (25 MBps) Copying: 826/1024 [MB] (26 MBps) Copying: 851/1024 [MB] (25 MBps) Copying: 876/1024 [MB] (24 MBps) Copying: 902/1024 [MB] (26 MBps) Copying: 929/1024 [MB] (26 MBps) Copying: 956/1024 [MB] (27 MBps) Copying: 981/1024 [MB] (24 MBps) Copying: 1006/1024 [MB] (24 MBps) Copying: 1023/1024 [MB] (17 MBps) Copying: 1048524/1048576 [kB] (712 kBps) Copying: 1024/1024 [MB] (average 24 MBps)[2024-07-13 06:10:47.300756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.722 [2024-07-13 06:10:47.300822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:55.722 [2024-07-13 06:10:47.300878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:55.722 
[2024-07-13 06:10:47.300914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.722 [2024-07-13 06:10:47.305069] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:55.722 [2024-07-13 06:10:47.309220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.722 [2024-07-13 06:10:47.309262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:55.722 [2024-07-13 06:10:47.309293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.052 ms 00:23:55.722 [2024-07-13 06:10:47.309319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.722 [2024-07-13 06:10:47.322747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.722 [2024-07-13 06:10:47.322803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:55.722 [2024-07-13 06:10:47.322837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.202 ms 00:23:55.722 [2024-07-13 06:10:47.322856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.722 [2024-07-13 06:10:47.348351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.722 [2024-07-13 06:10:47.348418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:55.722 [2024-07-13 06:10:47.348451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.473 ms 00:23:55.722 [2024-07-13 06:10:47.348462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.722 [2024-07-13 06:10:47.355616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.722 [2024-07-13 06:10:47.355649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:55.722 [2024-07-13 06:10:47.355680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.115 ms 00:23:55.722 [2024-07-13 06:10:47.355691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.722 [2024-07-13 06:10:47.357296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.722 [2024-07-13 06:10:47.357335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:55.722 [2024-07-13 06:10:47.357350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.535 ms 00:23:55.722 [2024-07-13 06:10:47.357361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.722 [2024-07-13 06:10:47.360902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.722 [2024-07-13 06:10:47.360955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:55.722 [2024-07-13 06:10:47.360970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.502 ms 00:23:55.722 [2024-07-13 06:10:47.360982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.981 [2024-07-13 06:10:47.474035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.981 [2024-07-13 06:10:47.474092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:55.981 [2024-07-13 06:10:47.474113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.013 ms 00:23:55.981 [2024-07-13 06:10:47.474125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.981 [2024-07-13 06:10:47.476217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.981 [2024-07-13 06:10:47.476297] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:55.981 [2024-07-13 06:10:47.476329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.013 ms 00:23:55.981 [2024-07-13 06:10:47.476340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.981 [2024-07-13 06:10:47.477768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.981 [2024-07-13 06:10:47.477818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:55.981 [2024-07-13 06:10:47.477849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms 00:23:55.981 [2024-07-13 06:10:47.477859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.981 [2024-07-13 06:10:47.479273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.981 [2024-07-13 06:10:47.479344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:55.981 [2024-07-13 06:10:47.479359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.378 ms 00:23:55.981 [2024-07-13 06:10:47.479370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.981 [2024-07-13 06:10:47.480740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.981 [2024-07-13 06:10:47.480796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:55.981 [2024-07-13 06:10:47.480811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 00:23:55.981 [2024-07-13 06:10:47.480823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.981 [2024-07-13 06:10:47.480905] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:55.981 [2024-07-13 06:10:47.480942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130560 / 261120 wr_cnt: 1 state: open 00:23:55.981 [2024-07-13 06:10:47.480956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.480967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.480979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481195] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:55.981 [2024-07-13 06:10:47.481408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 
06:10:47.481666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.481955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 
00:23:55.982 [2024-07-13 06:10:47.482398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 
wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.482994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.483006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.483017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.483029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.483040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.483056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.483078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.483101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.483127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:55.982 [2024-07-13 06:10:47.483158] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:55.982 [2024-07-13 06:10:47.483240] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 660d0a3d-086e-4cc6-b9c2-1269021638fc 00:23:55.982 [2024-07-13 06:10:47.483282] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130560 00:23:55.982 [2024-07-13 06:10:47.483303] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 131520 00:23:55.982 [2024-07-13 06:10:47.483319] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130560 00:23:55.982 [2024-07-13 06:10:47.483333] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:23:55.982 [2024-07-13 06:10:47.483344] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:55.982 [2024-07-13 06:10:47.483355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:55.982 [2024-07-13 06:10:47.483366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:55.982 [2024-07-13 06:10:47.483377] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:55.982 [2024-07-13 06:10:47.483389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:55.982 [2024-07-13 06:10:47.483408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.982 [2024-07-13 06:10:47.483441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:55.982 [2024-07-13 06:10:47.483466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.519 ms 00:23:55.982 [2024-07-13 06:10:47.483488] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.982 [2024-07-13 06:10:47.485148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.982 [2024-07-13 06:10:47.485180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:55.982 [2024-07-13 06:10:47.485195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:23:55.982 [2024-07-13 06:10:47.485207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.982 [2024-07-13 06:10:47.485304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.982 [2024-07-13 06:10:47.485321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:55.982 [2024-07-13 06:10:47.485334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:55.982 [2024-07-13 06:10:47.485345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.982 [2024-07-13 06:10:47.490474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.982 [2024-07-13 06:10:47.490531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:55.983 [2024-07-13 06:10:47.490564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.983 [2024-07-13 06:10:47.490591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.983 [2024-07-13 06:10:47.490650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.983 [2024-07-13 06:10:47.490666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:55.983 [2024-07-13 06:10:47.490678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.983 [2024-07-13 06:10:47.490689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.983 [2024-07-13 06:10:47.490775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.983 [2024-07-13 06:10:47.490812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:55.983 [2024-07-13 06:10:47.490833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.983 [2024-07-13 06:10:47.490854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.983 [2024-07-13 06:10:47.490905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.983 [2024-07-13 06:10:47.490925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:55.983 [2024-07-13 06:10:47.490952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.983 [2024-07-13 06:10:47.491001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.983 [2024-07-13 06:10:47.500229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.983 [2024-07-13 06:10:47.500331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:55.983 [2024-07-13 06:10:47.500367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.983 [2024-07-13 06:10:47.500380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.983 [2024-07-13 06:10:47.507759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.983 [2024-07-13 06:10:47.507827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:55.983 [2024-07-13 06:10:47.507860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:55.983 [2024-07-13 06:10:47.507872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.983 [2024-07-13 06:10:47.507910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.983 [2024-07-13 06:10:47.507969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:55.983 [2024-07-13 06:10:47.507997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.983 [2024-07-13 06:10:47.508023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.983 [2024-07-13 06:10:47.508110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.983 [2024-07-13 06:10:47.508140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:55.983 [2024-07-13 06:10:47.508159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.983 [2024-07-13 06:10:47.508180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.983 [2024-07-13 06:10:47.508364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.983 [2024-07-13 06:10:47.508435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:55.983 [2024-07-13 06:10:47.508462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.983 [2024-07-13 06:10:47.508500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.983 [2024-07-13 06:10:47.508588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.983 [2024-07-13 06:10:47.508619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:55.983 [2024-07-13 06:10:47.508669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.983 [2024-07-13 06:10:47.508692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.983 [2024-07-13 06:10:47.508760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.983 [2024-07-13 06:10:47.508788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:55.983 [2024-07-13 06:10:47.508825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.983 [2024-07-13 06:10:47.508838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.983 [2024-07-13 06:10:47.508915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:55.983 [2024-07-13 06:10:47.508954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:55.983 [2024-07-13 06:10:47.508976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:55.983 [2024-07-13 06:10:47.508997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.983 [2024-07-13 06:10:47.509243] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 211.019 ms, result 0 00:23:56.548 00:23:56.548 00:23:56.548 06:10:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:23:59.106 06:10:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:59.106 [2024-07-13 06:10:50.558885] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 
initialization... 00:23:59.107 [2024-07-13 06:10:50.559066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94702 ] 00:23:59.107 [2024-07-13 06:10:50.708743] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.107 [2024-07-13 06:10:50.756621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.373 [2024-07-13 06:10:50.857482] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:59.373 [2024-07-13 06:10:50.857643] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:59.373 [2024-07-13 06:10:51.028213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.373 [2024-07-13 06:10:51.028355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:59.373 [2024-07-13 06:10:51.028391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:59.373 [2024-07-13 06:10:51.028402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.373 [2024-07-13 06:10:51.028476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.373 [2024-07-13 06:10:51.028492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:59.373 [2024-07-13 06:10:51.028508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:59.373 [2024-07-13 06:10:51.028517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.373 [2024-07-13 06:10:51.028595] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:59.373 [2024-07-13 06:10:51.028972] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:59.373 [2024-07-13 06:10:51.029023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.373 [2024-07-13 06:10:51.029039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:59.373 [2024-07-13 06:10:51.029052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:23:59.373 [2024-07-13 06:10:51.029063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.373 [2024-07-13 06:10:51.030608] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:59.373 [2024-07-13 06:10:51.033326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.373 [2024-07-13 06:10:51.033373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:59.373 [2024-07-13 06:10:51.033398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.719 ms 00:23:59.373 [2024-07-13 06:10:51.033410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.373 [2024-07-13 06:10:51.033513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.373 [2024-07-13 06:10:51.033532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:59.373 [2024-07-13 06:10:51.033544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:59.373 [2024-07-13 06:10:51.033570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.373 [2024-07-13 06:10:51.039927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:59.373 [2024-07-13 06:10:51.039984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:59.373 [2024-07-13 06:10:51.040014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.254 ms 00:23:59.373 [2024-07-13 06:10:51.040025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.373 [2024-07-13 06:10:51.040125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.373 [2024-07-13 06:10:51.040218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:59.373 [2024-07-13 06:10:51.040276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:59.373 [2024-07-13 06:10:51.040297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.373 [2024-07-13 06:10:51.040431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.373 [2024-07-13 06:10:51.040465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:59.373 [2024-07-13 06:10:51.040499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:59.373 [2024-07-13 06:10:51.040522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.373 [2024-07-13 06:10:51.040594] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:59.373 [2024-07-13 06:10:51.042193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.373 [2024-07-13 06:10:51.042263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:59.373 [2024-07-13 06:10:51.042294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.622 ms 00:23:59.373 [2024-07-13 06:10:51.042304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.373 [2024-07-13 06:10:51.042351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.373 [2024-07-13 06:10:51.042396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:59.373 [2024-07-13 06:10:51.042449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:59.373 [2024-07-13 06:10:51.042499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.373 [2024-07-13 06:10:51.042545] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:59.373 [2024-07-13 06:10:51.042604] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:59.373 [2024-07-13 06:10:51.042671] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:59.373 [2024-07-13 06:10:51.042712] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:59.373 [2024-07-13 06:10:51.042875] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:59.373 [2024-07-13 06:10:51.042912] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:59.373 [2024-07-13 06:10:51.042929] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:59.373 [2024-07-13 06:10:51.042945] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:59.373 [2024-07-13 06:10:51.042960] 
ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:59.373 [2024-07-13 06:10:51.042972] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:59.373 [2024-07-13 06:10:51.042983] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:59.373 [2024-07-13 06:10:51.043009] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:59.373 [2024-07-13 06:10:51.043034] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:59.373 [2024-07-13 06:10:51.043045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.374 [2024-07-13 06:10:51.043056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:59.374 [2024-07-13 06:10:51.043070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:23:59.374 [2024-07-13 06:10:51.043080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.374 [2024-07-13 06:10:51.043199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.374 [2024-07-13 06:10:51.043236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:59.374 [2024-07-13 06:10:51.043250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:23:59.374 [2024-07-13 06:10:51.043261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.374 [2024-07-13 06:10:51.043378] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:59.374 [2024-07-13 06:10:51.043405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:59.374 [2024-07-13 06:10:51.043477] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:59.374 [2024-07-13 06:10:51.043506] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.374 [2024-07-13 06:10:51.043542] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:59.374 [2024-07-13 06:10:51.043562] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:59.374 [2024-07-13 06:10:51.043582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:59.374 [2024-07-13 06:10:51.043603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:59.374 [2024-07-13 06:10:51.043625] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:59.374 [2024-07-13 06:10:51.043644] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:59.374 [2024-07-13 06:10:51.043663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:59.374 [2024-07-13 06:10:51.043680] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:59.374 [2024-07-13 06:10:51.043699] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:59.374 [2024-07-13 06:10:51.043712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:59.374 [2024-07-13 06:10:51.043723] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:59.374 [2024-07-13 06:10:51.043733] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.374 [2024-07-13 06:10:51.043749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:59.374 [2024-07-13 06:10:51.043761] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:59.374 [2024-07-13 06:10:51.043771] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.374 [2024-07-13 06:10:51.043782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:59.374 [2024-07-13 06:10:51.043796] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:59.374 [2024-07-13 06:10:51.043814] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.374 [2024-07-13 06:10:51.043834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:59.374 [2024-07-13 06:10:51.043854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:59.374 [2024-07-13 06:10:51.043875] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.374 [2024-07-13 06:10:51.043896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:59.374 [2024-07-13 06:10:51.043916] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:59.374 [2024-07-13 06:10:51.043934] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.374 [2024-07-13 06:10:51.043953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:59.374 [2024-07-13 06:10:51.043965] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:59.374 [2024-07-13 06:10:51.043976] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:59.374 [2024-07-13 06:10:51.043986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:59.374 [2024-07-13 06:10:51.044001] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:59.374 [2024-07-13 06:10:51.044012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:59.374 [2024-07-13 06:10:51.044023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:59.374 [2024-07-13 06:10:51.044033] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:59.374 [2024-07-13 06:10:51.044043] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:59.374 [2024-07-13 06:10:51.044054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:59.374 [2024-07-13 06:10:51.044064] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:59.374 [2024-07-13 06:10:51.044074] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.374 [2024-07-13 06:10:51.044085] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:59.374 [2024-07-13 06:10:51.044095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:59.374 [2024-07-13 06:10:51.044105] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.374 [2024-07-13 06:10:51.044115] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:59.374 [2024-07-13 06:10:51.044126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:59.374 [2024-07-13 06:10:51.044136] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:59.374 [2024-07-13 06:10:51.044157] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.374 [2024-07-13 06:10:51.044203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:59.374 [2024-07-13 06:10:51.044234] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:59.374 [2024-07-13 06:10:51.044245] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:59.374 
[2024-07-13 06:10:51.044255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:59.374 [2024-07-13 06:10:51.044264] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:59.374 [2024-07-13 06:10:51.044275] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:59.374 [2024-07-13 06:10:51.044286] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:59.374 [2024-07-13 06:10:51.044314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:59.374 [2024-07-13 06:10:51.044369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:59.374 [2024-07-13 06:10:51.044390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:59.374 [2024-07-13 06:10:51.044412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:59.374 [2024-07-13 06:10:51.044432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:59.374 [2024-07-13 06:10:51.044451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:59.374 [2024-07-13 06:10:51.044472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:59.374 [2024-07-13 06:10:51.044494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:59.374 [2024-07-13 06:10:51.044515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:59.374 [2024-07-13 06:10:51.044536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:59.374 [2024-07-13 06:10:51.044562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:59.374 [2024-07-13 06:10:51.044582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:59.374 [2024-07-13 06:10:51.044594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:59.374 [2024-07-13 06:10:51.044605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:59.374 [2024-07-13 06:10:51.044617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:59.374 [2024-07-13 06:10:51.044628] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:59.374 [2024-07-13 06:10:51.044651] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:59.374 [2024-07-13 06:10:51.044675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:59.374 [2024-07-13 06:10:51.044690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:59.374 [2024-07-13 06:10:51.044702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:59.374 [2024-07-13 06:10:51.044713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:59.374 [2024-07-13 06:10:51.044726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.374 [2024-07-13 06:10:51.044747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:59.374 [2024-07-13 06:10:51.044764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.414 ms 00:23:59.374 [2024-07-13 06:10:51.044804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.374 [2024-07-13 06:10:51.066522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.374 [2024-07-13 06:10:51.066574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:59.374 [2024-07-13 06:10:51.066608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.606 ms 00:23:59.374 [2024-07-13 06:10:51.066619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.374 [2024-07-13 06:10:51.066835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.374 [2024-07-13 06:10:51.066871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:59.374 [2024-07-13 06:10:51.066899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:23:59.374 [2024-07-13 06:10:51.066930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.374 [2024-07-13 06:10:51.077396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.374 [2024-07-13 06:10:51.077470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:59.374 [2024-07-13 06:10:51.077487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.343 ms 00:23:59.374 [2024-07-13 06:10:51.077498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.374 [2024-07-13 06:10:51.077563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.374 [2024-07-13 06:10:51.077578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:59.374 [2024-07-13 06:10:51.077597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:59.374 [2024-07-13 06:10:51.077606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.374 [2024-07-13 06:10:51.078108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.374 [2024-07-13 06:10:51.078217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:59.374 [2024-07-13 06:10:51.078233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:23:59.374 [2024-07-13 06:10:51.078243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.374 [2024-07-13 06:10:51.078457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.375 [2024-07-13 06:10:51.078510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:59.375 [2024-07-13 06:10:51.078535] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:23:59.375 [2024-07-13 06:10:51.078561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.375 [2024-07-13 06:10:51.085213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.375 [2024-07-13 06:10:51.085268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:59.375 [2024-07-13 06:10:51.085285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.606 ms 00:23:59.375 [2024-07-13 06:10:51.085297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.375 [2024-07-13 06:10:51.088386] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:59.375 [2024-07-13 06:10:51.088439] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:59.375 [2024-07-13 06:10:51.088516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.375 [2024-07-13 06:10:51.088539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:59.375 [2024-07-13 06:10:51.088561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.088 ms 00:23:59.375 [2024-07-13 06:10:51.088583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.633 [2024-07-13 06:10:51.108121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.633 [2024-07-13 06:10:51.108237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:59.633 [2024-07-13 06:10:51.108286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.469 ms 00:23:59.633 [2024-07-13 06:10:51.108308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.633 [2024-07-13 06:10:51.110621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.633 [2024-07-13 06:10:51.110663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:59.633 [2024-07-13 06:10:51.110733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.222 ms 00:23:59.633 [2024-07-13 06:10:51.110785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.633 [2024-07-13 06:10:51.112864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.633 [2024-07-13 06:10:51.112923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:59.633 [2024-07-13 06:10:51.112942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.996 ms 00:23:59.633 [2024-07-13 06:10:51.112952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.633 [2024-07-13 06:10:51.113449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.633 [2024-07-13 06:10:51.113555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:59.633 [2024-07-13 06:10:51.113612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:23:59.633 [2024-07-13 06:10:51.113643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.633 [2024-07-13 06:10:51.134599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.633 [2024-07-13 06:10:51.134699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:59.633 [2024-07-13 06:10:51.134744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
20.912 ms 00:23:59.633 [2024-07-13 06:10:51.134757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.633 [2024-07-13 06:10:51.144175] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:59.633 [2024-07-13 06:10:51.146456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.633 [2024-07-13 06:10:51.146504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:59.633 [2024-07-13 06:10:51.146535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.637 ms 00:23:59.633 [2024-07-13 06:10:51.146558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.634 [2024-07-13 06:10:51.146647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.634 [2024-07-13 06:10:51.146664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:59.634 [2024-07-13 06:10:51.146676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:59.634 [2024-07-13 06:10:51.146685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.634 [2024-07-13 06:10:51.148522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.634 [2024-07-13 06:10:51.148573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:59.634 [2024-07-13 06:10:51.148614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.733 ms 00:23:59.634 [2024-07-13 06:10:51.148625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.634 [2024-07-13 06:10:51.148660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.634 [2024-07-13 06:10:51.148675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:59.634 [2024-07-13 06:10:51.148685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:59.634 [2024-07-13 06:10:51.148694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.634 [2024-07-13 06:10:51.148730] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:59.634 [2024-07-13 06:10:51.148745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.634 [2024-07-13 06:10:51.148770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:59.634 [2024-07-13 06:10:51.148780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:59.634 [2024-07-13 06:10:51.148805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.634 [2024-07-13 06:10:51.152666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.634 [2024-07-13 06:10:51.152719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:59.634 [2024-07-13 06:10:51.152751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.800 ms 00:23:59.634 [2024-07-13 06:10:51.152761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.634 [2024-07-13 06:10:51.152846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.634 [2024-07-13 06:10:51.152863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:59.634 [2024-07-13 06:10:51.152880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:59.634 [2024-07-13 06:10:51.152891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.634 
[2024-07-13 06:10:51.161803] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.513 ms, result 0 00:24:40.396  Copying: 744/1048576 [kB] (744 kBps) Copying: 3260/1048576 [kB] (2516 kBps) Copying: 13328/1048576 [kB] (10068 kBps) Copying: 39/1024 [MB] (26 MBps) Copying: 65/1024 [MB] (25 MBps) Copying: 91/1024 [MB] (26 MBps) Copying: 119/1024 [MB] (27 MBps) Copying: 146/1024 [MB] (27 MBps) Copying: 174/1024 [MB] (27 MBps) Copying: 202/1024 [MB] (28 MBps) Copying: 230/1024 [MB] (27 MBps) Copying: 257/1024 [MB] (27 MBps) Copying: 285/1024 [MB] (27 MBps) Copying: 313/1024 [MB] (27 MBps) Copying: 340/1024 [MB] (27 MBps) Copying: 368/1024 [MB] (27 MBps) Copying: 396/1024 [MB] (27 MBps) Copying: 423/1024 [MB] (27 MBps) Copying: 451/1024 [MB] (27 MBps) Copying: 478/1024 [MB] (27 MBps) Copying: 506/1024 [MB] (27 MBps) Copying: 533/1024 [MB] (27 MBps) Copying: 561/1024 [MB] (28 MBps) Copying: 589/1024 [MB] (28 MBps) Copying: 616/1024 [MB] (26 MBps) Copying: 643/1024 [MB] (27 MBps) Copying: 671/1024 [MB] (27 MBps) Copying: 699/1024 [MB] (27 MBps) Copying: 726/1024 [MB] (27 MBps) Copying: 754/1024 [MB] (27 MBps) Copying: 781/1024 [MB] (27 MBps) Copying: 809/1024 [MB] (28 MBps) Copying: 837/1024 [MB] (27 MBps) Copying: 865/1024 [MB] (27 MBps) Copying: 894/1024 [MB] (28 MBps) Copying: 922/1024 [MB] (28 MBps) Copying: 951/1024 [MB] (28 MBps) Copying: 979/1024 [MB] (28 MBps) Copying: 1007/1024 [MB] (28 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-13 06:11:31.835233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.396 [2024-07-13 06:11:31.835315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:40.396 [2024-07-13 06:11:31.835336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:40.396 [2024-07-13 06:11:31.835348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.396 [2024-07-13 06:11:31.835379] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:40.396 [2024-07-13 06:11:31.837313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.396 [2024-07-13 06:11:31.837355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:40.396 [2024-07-13 06:11:31.837382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.907 ms 00:24:40.396 [2024-07-13 06:11:31.837395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.396 [2024-07-13 06:11:31.837740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.396 [2024-07-13 06:11:31.837778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:40.396 [2024-07-13 06:11:31.837809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:24:40.396 [2024-07-13 06:11:31.837824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.396 [2024-07-13 06:11:31.850125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.396 [2024-07-13 06:11:31.850187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:40.396 [2024-07-13 06:11:31.850205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.275 ms 00:24:40.396 [2024-07-13 06:11:31.850217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.396 [2024-07-13 06:11:31.855967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:24:40.396 [2024-07-13 06:11:31.856011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:40.396 [2024-07-13 06:11:31.856039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.686 ms 00:24:40.396 [2024-07-13 06:11:31.856049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.396 [2024-07-13 06:11:31.857407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.396 [2024-07-13 06:11:31.857501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:40.396 [2024-07-13 06:11:31.857544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 00:24:40.396 [2024-07-13 06:11:31.857568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.396 [2024-07-13 06:11:31.860295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.396 [2024-07-13 06:11:31.860384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:40.396 [2024-07-13 06:11:31.860413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.693 ms 00:24:40.396 [2024-07-13 06:11:31.860422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.396 [2024-07-13 06:11:31.864442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.397 [2024-07-13 06:11:31.864502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:40.397 [2024-07-13 06:11:31.864517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.996 ms 00:24:40.397 [2024-07-13 06:11:31.864527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.397 [2024-07-13 06:11:31.866296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.397 [2024-07-13 06:11:31.866371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:40.397 [2024-07-13 06:11:31.866398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.735 ms 00:24:40.397 [2024-07-13 06:11:31.866407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.397 [2024-07-13 06:11:31.867856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.397 [2024-07-13 06:11:31.867921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:40.397 [2024-07-13 06:11:31.867964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:24:40.397 [2024-07-13 06:11:31.867984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.397 [2024-07-13 06:11:31.869307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.397 [2024-07-13 06:11:31.869372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:40.397 [2024-07-13 06:11:31.869385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.290 ms 00:24:40.397 [2024-07-13 06:11:31.869395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.397 [2024-07-13 06:11:31.870572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.397 [2024-07-13 06:11:31.870622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:40.397 [2024-07-13 06:11:31.870664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:24:40.397 [2024-07-13 06:11:31.870673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.397 [2024-07-13 
06:11:31.870705] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:40.397 [2024-07-13 06:11:31.870725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:40.397 [2024-07-13 06:11:31.870743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:24:40.397 [2024-07-13 06:11:31.870754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.870993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 
06:11:31.871004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:24:40.397 [2024-07-13 06:11:31.871293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:40.397 [2024-07-13 06:11:31.871408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:40.398 [2024-07-13 06:11:31.871837] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:40.398 [2024-07-13 06:11:31.871857] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 660d0a3d-086e-4cc6-b9c2-1269021638fc 00:24:40.398 [2024-07-13 06:11:31.871867] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:24:40.398 [2024-07-13 06:11:31.871877] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136384 00:24:40.398 [2024-07-13 06:11:31.871887] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134400 00:24:40.398 [2024-07-13 06:11:31.871897] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:24:40.398 [2024-07-13 06:11:31.871917] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:40.398 [2024-07-13 06:11:31.871928] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:40.398 [2024-07-13 06:11:31.871937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:40.398 [2024-07-13 06:11:31.871946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:40.398 [2024-07-13 06:11:31.871955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:40.398 [2024-07-13 06:11:31.871965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.398 [2024-07-13 06:11:31.871976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:40.398 [2024-07-13 06:11:31.871986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.262 ms 00:24:40.398 [2024-07-13 06:11:31.872000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.398 [2024-07-13 06:11:31.873249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.398 [2024-07-13 06:11:31.873289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:40.398 [2024-07-13 06:11:31.873303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:24:40.398 [2024-07-13 06:11:31.873314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.398 [2024-07-13 06:11:31.873425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.398 [2024-07-13 06:11:31.873456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:40.398 [2024-07-13 06:11:31.873467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:24:40.398 [2024-07-13 06:11:31.873484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.398 [2024-07-13 06:11:31.877528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.398 [2024-07-13 06:11:31.877587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:40.398 [2024-07-13 06:11:31.877599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.398 [2024-07-13 06:11:31.877609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.398 [2024-07-13 06:11:31.877659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.398 [2024-07-13 06:11:31.877680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:40.398 [2024-07-13 06:11:31.877690] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.398 [2024-07-13 06:11:31.877699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.398 [2024-07-13 06:11:31.877771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.398 [2024-07-13 06:11:31.877787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:40.398 [2024-07-13 06:11:31.877836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.398 [2024-07-13 06:11:31.877862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.398 [2024-07-13 06:11:31.877882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.398 [2024-07-13 06:11:31.877894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:40.398 [2024-07-13 06:11:31.877909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.398 [2024-07-13 06:11:31.877923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.398 [2024-07-13 06:11:31.884927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.398 [2024-07-13 06:11:31.884985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:40.399 [2024-07-13 06:11:31.885015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.399 [2024-07-13 06:11:31.885025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.399 [2024-07-13 06:11:31.890992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.399 [2024-07-13 06:11:31.891061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:40.399 [2024-07-13 06:11:31.891091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.399 [2024-07-13 06:11:31.891101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.399 [2024-07-13 06:11:31.891158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.399 [2024-07-13 06:11:31.891173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:40.399 [2024-07-13 06:11:31.891184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.399 [2024-07-13 06:11:31.891193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.399 [2024-07-13 06:11:31.891243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.399 [2024-07-13 06:11:31.891256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:40.399 [2024-07-13 06:11:31.891266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.399 [2024-07-13 06:11:31.891280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.399 [2024-07-13 06:11:31.891409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.399 [2024-07-13 06:11:31.891425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:40.399 [2024-07-13 06:11:31.891436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.399 [2024-07-13 06:11:31.891446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.399 [2024-07-13 06:11:31.891489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.399 [2024-07-13 06:11:31.891505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:24:40.399 [2024-07-13 06:11:31.891516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.399 [2024-07-13 06:11:31.891526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.399 [2024-07-13 06:11:31.891575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.399 [2024-07-13 06:11:31.891588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:40.399 [2024-07-13 06:11:31.891598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.399 [2024-07-13 06:11:31.891608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.399 [2024-07-13 06:11:31.891663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:40.399 [2024-07-13 06:11:31.891685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:40.399 [2024-07-13 06:11:31.891696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:40.399 [2024-07-13 06:11:31.891709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.399 [2024-07-13 06:11:31.891844] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.583 ms, result 0 00:24:40.399 00:24:40.399 00:24:40.399 06:11:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:42.299 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:42.299 06:11:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:42.299 [2024-07-13 06:11:34.024801] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
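At this point the first read-back pass is done: the FTL instance shut down cleanly (result 0), the first half of the data passed its md5 check, and spdk_dd is relaunched below to read the second half. The write-amplification figure in the stats dump above is simply total writes over user writes, 136384 / 134400 ≈ 1.0148; the second stats dump later in this log reports "WAF: inf" because that pass issues no user writes. A minimal sketch of the read-back-and-verify pattern this step performs, built only from the paths and flags visible in this log (block size and the earlier checksum generation are handled elsewhere in dirty_shutdown.sh):

  SPDK=/home/vagrant/spdk_repo/spdk
  # Read the second 262144-block slice of the FTL bdev (skipping the first
  # 262144 blocks) back into a regular file...
  "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/testfile2" \
      --count=262144 --skip=262144 --json="$SPDK/test/ftl/config/ftl.json"
  # ...then verify it against the checksum recorded earlier in the test.
  md5sum -c "$SPDK/test/ftl/testfile2.md5"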
00:24:42.299 [2024-07-13 06:11:34.024994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95133 ] 00:24:42.558 [2024-07-13 06:11:34.165019] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.558 [2024-07-13 06:11:34.206098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.817 [2024-07-13 06:11:34.296058] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:42.817 [2024-07-13 06:11:34.296195] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:42.817 [2024-07-13 06:11:34.454103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.817 [2024-07-13 06:11:34.454175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:42.817 [2024-07-13 06:11:34.454219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:42.817 [2024-07-13 06:11:34.454240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.817 [2024-07-13 06:11:34.454306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.817 [2024-07-13 06:11:34.454325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:42.817 [2024-07-13 06:11:34.454341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:42.817 [2024-07-13 06:11:34.454351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.817 [2024-07-13 06:11:34.454380] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:42.817 [2024-07-13 06:11:34.454652] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:42.817 [2024-07-13 06:11:34.454678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.817 [2024-07-13 06:11:34.454690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:42.817 [2024-07-13 06:11:34.454701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:24:42.817 [2024-07-13 06:11:34.454711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.817 [2024-07-13 06:11:34.455826] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:42.817 [2024-07-13 06:11:34.458010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.817 [2024-07-13 06:11:34.458045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:42.817 [2024-07-13 06:11:34.458090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.186 ms 00:24:42.817 [2024-07-13 06:11:34.458100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.817 [2024-07-13 06:11:34.458177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.817 [2024-07-13 06:11:34.458196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:42.817 [2024-07-13 06:11:34.458217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:42.817 [2024-07-13 06:11:34.458228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.817 [2024-07-13 06:11:34.462334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:42.817 [2024-07-13 06:11:34.462368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:42.817 [2024-07-13 06:11:34.462398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.032 ms 00:24:42.818 [2024-07-13 06:11:34.462408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.818 [2024-07-13 06:11:34.462501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.818 [2024-07-13 06:11:34.462519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:42.818 [2024-07-13 06:11:34.462533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:42.818 [2024-07-13 06:11:34.462544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.818 [2024-07-13 06:11:34.462611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.818 [2024-07-13 06:11:34.462636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:42.818 [2024-07-13 06:11:34.462652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:42.818 [2024-07-13 06:11:34.462662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.818 [2024-07-13 06:11:34.462697] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:42.818 [2024-07-13 06:11:34.464036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.818 [2024-07-13 06:11:34.464069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:42.818 [2024-07-13 06:11:34.464099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.351 ms 00:24:42.818 [2024-07-13 06:11:34.464109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.818 [2024-07-13 06:11:34.464192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.818 [2024-07-13 06:11:34.464213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:42.818 [2024-07-13 06:11:34.464244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:42.818 [2024-07-13 06:11:34.464258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.818 [2024-07-13 06:11:34.464295] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:42.818 [2024-07-13 06:11:34.464322] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:42.818 [2024-07-13 06:11:34.464366] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:42.818 [2024-07-13 06:11:34.464388] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:42.818 [2024-07-13 06:11:34.464519] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:42.818 [2024-07-13 06:11:34.464548] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:42.818 [2024-07-13 06:11:34.464561] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:42.818 [2024-07-13 06:11:34.464584] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:42.818 [2024-07-13 06:11:34.464597] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:42.818 [2024-07-13 06:11:34.464608] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:42.818 [2024-07-13 06:11:34.464618] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:42.818 [2024-07-13 06:11:34.464628] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:42.818 [2024-07-13 06:11:34.464638] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:42.818 [2024-07-13 06:11:34.464649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.818 [2024-07-13 06:11:34.464660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:42.818 [2024-07-13 06:11:34.464679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:24:42.818 [2024-07-13 06:11:34.464697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.818 [2024-07-13 06:11:34.464781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.818 [2024-07-13 06:11:34.464811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:42.818 [2024-07-13 06:11:34.464822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:42.818 [2024-07-13 06:11:34.464848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.818 [2024-07-13 06:11:34.464941] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:42.818 [2024-07-13 06:11:34.464957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:42.818 [2024-07-13 06:11:34.464980] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:42.818 [2024-07-13 06:11:34.465004] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:42.818 [2024-07-13 06:11:34.465015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:42.818 [2024-07-13 06:11:34.465025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:42.818 [2024-07-13 06:11:34.465035] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:42.818 [2024-07-13 06:11:34.465045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:42.818 [2024-07-13 06:11:34.465055] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:42.818 [2024-07-13 06:11:34.465064] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:42.818 [2024-07-13 06:11:34.465077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:42.818 [2024-07-13 06:11:34.465088] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:42.818 [2024-07-13 06:11:34.465097] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:42.818 [2024-07-13 06:11:34.465134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:42.818 [2024-07-13 06:11:34.465157] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:42.818 [2024-07-13 06:11:34.465170] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:42.818 [2024-07-13 06:11:34.465181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:42.818 [2024-07-13 06:11:34.465193] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:42.818 [2024-07-13 06:11:34.465203] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:42.818 [2024-07-13 06:11:34.465228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:42.818 [2024-07-13 06:11:34.465238] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:42.818 [2024-07-13 06:11:34.465248] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:42.818 [2024-07-13 06:11:34.465257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:42.818 [2024-07-13 06:11:34.465267] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:42.818 [2024-07-13 06:11:34.465277] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:42.818 [2024-07-13 06:11:34.465286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:42.818 [2024-07-13 06:11:34.465319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:42.818 [2024-07-13 06:11:34.465330] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:42.818 [2024-07-13 06:11:34.465339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:42.818 [2024-07-13 06:11:34.465349] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:42.818 [2024-07-13 06:11:34.465359] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:42.818 [2024-07-13 06:11:34.465369] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:42.818 [2024-07-13 06:11:34.465379] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:42.818 [2024-07-13 06:11:34.465389] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:42.818 [2024-07-13 06:11:34.465399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:42.818 [2024-07-13 06:11:34.465409] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:42.818 [2024-07-13 06:11:34.465419] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:42.818 [2024-07-13 06:11:34.465429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:42.818 [2024-07-13 06:11:34.465439] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:42.818 [2024-07-13 06:11:34.465449] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:42.818 [2024-07-13 06:11:34.465458] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:42.818 [2024-07-13 06:11:34.465468] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:42.818 [2024-07-13 06:11:34.465481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:42.818 [2024-07-13 06:11:34.465491] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:42.818 [2024-07-13 06:11:34.465516] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:42.818 [2024-07-13 06:11:34.465526] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:42.818 [2024-07-13 06:11:34.465536] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:42.818 [2024-07-13 06:11:34.465557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:42.818 [2024-07-13 06:11:34.465568] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:42.818 [2024-07-13 06:11:34.465578] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:42.818 
[2024-07-13 06:11:34.465589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:42.818 [2024-07-13 06:11:34.465598] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:42.818 [2024-07-13 06:11:34.465608] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:42.818 [2024-07-13 06:11:34.465619] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:42.818 [2024-07-13 06:11:34.465634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:42.818 [2024-07-13 06:11:34.465646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:42.818 [2024-07-13 06:11:34.465657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:42.818 [2024-07-13 06:11:34.465667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:42.818 [2024-07-13 06:11:34.465681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:42.818 [2024-07-13 06:11:34.465693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:42.818 [2024-07-13 06:11:34.465703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:42.818 [2024-07-13 06:11:34.465728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:42.818 [2024-07-13 06:11:34.465753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:42.818 [2024-07-13 06:11:34.465764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:42.818 [2024-07-13 06:11:34.465774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:42.818 [2024-07-13 06:11:34.465784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:42.818 [2024-07-13 06:11:34.465794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:42.818 [2024-07-13 06:11:34.465805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:42.819 [2024-07-13 06:11:34.465815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:42.819 [2024-07-13 06:11:34.465825] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:42.819 [2024-07-13 06:11:34.465846] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:42.819 [2024-07-13 06:11:34.465868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:42.819 [2024-07-13 06:11:34.465879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:42.819 [2024-07-13 06:11:34.465890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:42.819 [2024-07-13 06:11:34.465903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:42.819 [2024-07-13 06:11:34.465916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.819 [2024-07-13 06:11:34.465935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:42.819 [2024-07-13 06:11:34.465950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms 00:24:42.819 [2024-07-13 06:11:34.465961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.819 [2024-07-13 06:11:34.484793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.819 [2024-07-13 06:11:34.484863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:42.819 [2024-07-13 06:11:34.484883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.774 ms 00:24:42.819 [2024-07-13 06:11:34.484894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.819 [2024-07-13 06:11:34.484993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.819 [2024-07-13 06:11:34.485008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:42.819 [2024-07-13 06:11:34.485019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:42.819 [2024-07-13 06:11:34.485034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.819 [2024-07-13 06:11:34.492050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.819 [2024-07-13 06:11:34.492088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:42.819 [2024-07-13 06:11:34.492119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.914 ms 00:24:42.819 [2024-07-13 06:11:34.492130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.819 [2024-07-13 06:11:34.492221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.819 [2024-07-13 06:11:34.492238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:42.819 [2024-07-13 06:11:34.492256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:42.819 [2024-07-13 06:11:34.492267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.819 [2024-07-13 06:11:34.492614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.819 [2024-07-13 06:11:34.492631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:42.819 [2024-07-13 06:11:34.492643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:24:42.819 [2024-07-13 06:11:34.492654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.819 [2024-07-13 06:11:34.492792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.819 [2024-07-13 06:11:34.492841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:42.819 [2024-07-13 06:11:34.492855] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:24:42.819 [2024-07-13 06:11:34.492871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.819 [2024-07-13 06:11:34.497276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.819 [2024-07-13 06:11:34.497311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:42.819 [2024-07-13 06:11:34.497342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.371 ms 00:24:42.819 [2024-07-13 06:11:34.497353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.819 [2024-07-13 06:11:34.499545] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:42.819 [2024-07-13 06:11:34.499603] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:42.819 [2024-07-13 06:11:34.499621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.819 [2024-07-13 06:11:34.499631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:42.819 [2024-07-13 06:11:34.499642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.131 ms 00:24:42.819 [2024-07-13 06:11:34.499652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.819 [2024-07-13 06:11:34.512477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.819 [2024-07-13 06:11:34.512535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:42.819 [2024-07-13 06:11:34.512567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.784 ms 00:24:42.819 [2024-07-13 06:11:34.512577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.819 [2024-07-13 06:11:34.514236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.819 [2024-07-13 06:11:34.514296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:42.819 [2024-07-13 06:11:34.514310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.611 ms 00:24:42.819 [2024-07-13 06:11:34.514319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.819 [2024-07-13 06:11:34.515939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.819 [2024-07-13 06:11:34.515972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:42.819 [2024-07-13 06:11:34.516002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.582 ms 00:24:42.819 [2024-07-13 06:11:34.516011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.819 [2024-07-13 06:11:34.516397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.819 [2024-07-13 06:11:34.516418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:42.819 [2024-07-13 06:11:34.516430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:24:42.819 [2024-07-13 06:11:34.516445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.819 [2024-07-13 06:11:34.531123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.819 [2024-07-13 06:11:34.531198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:42.819 [2024-07-13 06:11:34.531232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
14.643 ms 00:24:42.819 [2024-07-13 06:11:34.531243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.819 [2024-07-13 06:11:34.540123] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:43.078 [2024-07-13 06:11:34.543079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.078 [2024-07-13 06:11:34.543171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:43.078 [2024-07-13 06:11:34.543199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.775 ms 00:24:43.078 [2024-07-13 06:11:34.543222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.078 [2024-07-13 06:11:34.543335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.078 [2024-07-13 06:11:34.543361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:43.078 [2024-07-13 06:11:34.543375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:43.078 [2024-07-13 06:11:34.543386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.078 [2024-07-13 06:11:34.544167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.078 [2024-07-13 06:11:34.544236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:43.078 [2024-07-13 06:11:34.544252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:24:43.078 [2024-07-13 06:11:34.544279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.078 [2024-07-13 06:11:34.544314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.078 [2024-07-13 06:11:34.544330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:43.078 [2024-07-13 06:11:34.544342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:43.078 [2024-07-13 06:11:34.544353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.078 [2024-07-13 06:11:34.544422] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:43.078 [2024-07-13 06:11:34.544467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.078 [2024-07-13 06:11:34.544500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:43.078 [2024-07-13 06:11:34.544513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:43.078 [2024-07-13 06:11:34.544528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.078 [2024-07-13 06:11:34.548102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.078 [2024-07-13 06:11:34.548156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:43.078 [2024-07-13 06:11:34.548203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.543 ms 00:24:43.078 [2024-07-13 06:11:34.548215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.078 [2024-07-13 06:11:34.548306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.078 [2024-07-13 06:11:34.548324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:43.078 [2024-07-13 06:11:34.548350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:24:43.078 [2024-07-13 06:11:34.548369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.078 
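The startup sequence above is the dirty-shutdown recovery path: the superblock is loaded and validated, the layout is checked, and then NV cache state, valid map, band info, trim metadata, P2L checkpoints and finally the L2P are restored, each step with its own duration. The layout dump is internally consistent if region sizes are read as 4 KiB FTL blocks: the l2p region's blk_sz:0x5000 is 20480 blocks, i.e. 20480 * 4 KiB = 80.00 MiB, matching the "Region l2p ... blocks: 80.00 MiB" lines above. To see which steps dominate startup time, one quick, unofficial way to pull the name/duration pairs out of a saved copy of this console output ("build.log" is a stand-in name; assumes GNU grep and sort):

  # Rejoin wrapped lines, extract the alternating "name:"/"duration:"
  # matches, pair them up, and sort by duration, largest first.
  tr '\n' ' ' < build.log \
      | grep -oE 'name: [A-Za-z0-9 ]*[A-Za-z]|duration: [0-9.]+ ms' \
      | paste - - \
      | sort -t: -k3 -rn | head

Within this startup sequence, that puts Initialize metadata (18.774 ms), Restore P2L checkpoints (14.643 ms) and Restore valid map metadata (12.784 ms) at the top.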
[2024-07-13 06:11:34.549777] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 95.150 ms, result 0 00:25:27.612  Copying: 25/1024 [MB] (25 MBps) Copying: 48/1024 [MB] (23 MBps) Copying: 71/1024 [MB] (22 MBps) Copying: 94/1024 [MB] (23 MBps) Copying: 118/1024 [MB] (23 MBps) Copying: 141/1024 [MB] (23 MBps) Copying: 164/1024 [MB] (23 MBps) Copying: 187/1024 [MB] (23 MBps) Copying: 210/1024 [MB] (22 MBps) Copying: 233/1024 [MB] (23 MBps) Copying: 257/1024 [MB] (23 MBps) Copying: 280/1024 [MB] (23 MBps) Copying: 303/1024 [MB] (23 MBps) Copying: 325/1024 [MB] (22 MBps) Copying: 348/1024 [MB] (22 MBps) Copying: 371/1024 [MB] (23 MBps) Copying: 394/1024 [MB] (22 MBps) Copying: 417/1024 [MB] (22 MBps) Copying: 439/1024 [MB] (22 MBps) Copying: 463/1024 [MB] (24 MBps) Copying: 486/1024 [MB] (22 MBps) Copying: 508/1024 [MB] (22 MBps) Copying: 530/1024 [MB] (22 MBps) Copying: 554/1024 [MB] (23 MBps) Copying: 577/1024 [MB] (22 MBps) Copying: 600/1024 [MB] (23 MBps) Copying: 625/1024 [MB] (24 MBps) Copying: 648/1024 [MB] (23 MBps) Copying: 672/1024 [MB] (23 MBps) Copying: 695/1024 [MB] (23 MBps) Copying: 718/1024 [MB] (22 MBps) Copying: 740/1024 [MB] (22 MBps) Copying: 764/1024 [MB] (24 MBps) Copying: 786/1024 [MB] (21 MBps) Copying: 809/1024 [MB] (22 MBps) Copying: 833/1024 [MB] (23 MBps) Copying: 856/1024 [MB] (23 MBps) Copying: 879/1024 [MB] (23 MBps) Copying: 902/1024 [MB] (22 MBps) Copying: 924/1024 [MB] (22 MBps) Copying: 947/1024 [MB] (22 MBps) Copying: 969/1024 [MB] (22 MBps) Copying: 992/1024 [MB] (22 MBps) Copying: 1015/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-13 06:12:19.287340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.612 [2024-07-13 06:12:19.287441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:27.612 [2024-07-13 06:12:19.287485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:27.612 [2024-07-13 06:12:19.287503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.612 [2024-07-13 06:12:19.287546] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:27.612 [2024-07-13 06:12:19.288017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.612 [2024-07-13 06:12:19.288056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:27.612 [2024-07-13 06:12:19.288085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:25:27.612 [2024-07-13 06:12:19.288102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.612 [2024-07-13 06:12:19.288456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.612 [2024-07-13 06:12:19.288497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:27.612 [2024-07-13 06:12:19.288520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:25:27.612 [2024-07-13 06:12:19.288536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.612 [2024-07-13 06:12:19.293655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.612 [2024-07-13 06:12:19.293700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:27.612 [2024-07-13 06:12:19.293720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.092 ms 00:25:27.612 [2024-07-13 06:12:19.293736] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.612 [2024-07-13 06:12:19.301130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.612 [2024-07-13 06:12:19.301167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:27.612 [2024-07-13 06:12:19.301187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.363 ms 00:25:27.612 [2024-07-13 06:12:19.301211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.612 [2024-07-13 06:12:19.302762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.612 [2024-07-13 06:12:19.302831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:27.612 [2024-07-13 06:12:19.302861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.484 ms 00:25:27.612 [2024-07-13 06:12:19.302871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.612 [2024-07-13 06:12:19.306235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.612 [2024-07-13 06:12:19.306357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:27.612 [2024-07-13 06:12:19.306388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.341 ms 00:25:27.612 [2024-07-13 06:12:19.306399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.612 [2024-07-13 06:12:19.310149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.612 [2024-07-13 06:12:19.310203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:27.612 [2024-07-13 06:12:19.310229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.725 ms 00:25:27.612 [2024-07-13 06:12:19.310242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.612 [2024-07-13 06:12:19.312056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.612 [2024-07-13 06:12:19.312098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:27.612 [2024-07-13 06:12:19.312114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.789 ms 00:25:27.612 [2024-07-13 06:12:19.312140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.612 [2024-07-13 06:12:19.313671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.612 [2024-07-13 06:12:19.313701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:27.612 [2024-07-13 06:12:19.313730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.473 ms 00:25:27.612 [2024-07-13 06:12:19.313739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.612 [2024-07-13 06:12:19.315000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.612 [2024-07-13 06:12:19.315042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:27.613 [2024-07-13 06:12:19.315058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.241 ms 00:25:27.613 [2024-07-13 06:12:19.315069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.613 [2024-07-13 06:12:19.316363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.613 [2024-07-13 06:12:19.316427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:27.613 [2024-07-13 06:12:19.316440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
1.244 ms 00:25:27.613 [2024-07-13 06:12:19.316449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.613 [2024-07-13 06:12:19.316469] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:27.613 [2024-07-13 06:12:19.316486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:27.613 [2024-07-13 06:12:19.316504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:25:27.613 [2024-07-13 06:12:19.316515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 
06:12:19.316740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.316991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 
00:25:27.613 [2024-07-13 06:12:19.317088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 
wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:27.613 [2024-07-13 06:12:19.317676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:27.614 [2024-07-13 06:12:19.317687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:27.614 [2024-07-13 06:12:19.317699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:27.614 [2024-07-13 06:12:19.317711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:27.614 [2024-07-13 06:12:19.317722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:27.614 [2024-07-13 06:12:19.317734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:27.614 [2024-07-13 06:12:19.317759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:27.614 [2024-07-13 06:12:19.317772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:27.614 [2024-07-13 06:12:19.317783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:27.614 [2024-07-13 06:12:19.317795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:27.614 [2024-07-13 06:12:19.317829] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:27.614 [2024-07-13 06:12:19.317851] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 660d0a3d-086e-4cc6-b9c2-1269021638fc 00:25:27.614 [2024-07-13 06:12:19.317864] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:25:27.614 [2024-07-13 06:12:19.317875] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:27.614 [2024-07-13 06:12:19.317885] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:27.614 [2024-07-13 06:12:19.317897] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:27.614 [2024-07-13 06:12:19.317908] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:27.614 [2024-07-13 06:12:19.317939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:27.614 [2024-07-13 06:12:19.317951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:27.614 [2024-07-13 06:12:19.317962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:27.614 [2024-07-13 06:12:19.317972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:27.614 [2024-07-13 06:12:19.317984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.614 [2024-07-13 06:12:19.317997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:27.614 [2024-07-13 06:12:19.318009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.517 ms 00:25:27.614 [2024-07-13 06:12:19.318020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.614 [2024-07-13 06:12:19.319484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.614 [2024-07-13 06:12:19.319530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:27.614 [2024-07-13 06:12:19.319544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.426 ms 00:25:27.614 [2024-07-13 06:12:19.319558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.614 [2024-07-13 06:12:19.319630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.614 [2024-07-13 06:12:19.319644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:27.614 [2024-07-13 06:12:19.319668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:27.614 [2024-07-13 06:12:19.319677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.614 [2024-07-13 06:12:19.324544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.614 [2024-07-13 06:12:19.324574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:27.614 [2024-07-13 06:12:19.324593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.614 [2024-07-13 06:12:19.324603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.614 [2024-07-13 06:12:19.324655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.614 [2024-07-13 06:12:19.324670] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:27.614 [2024-07-13 06:12:19.324681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.614 [2024-07-13 06:12:19.324691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.614 [2024-07-13 06:12:19.324772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.614 [2024-07-13 06:12:19.324790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:27.614 [2024-07-13 06:12:19.324801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.614 [2024-07-13 06:12:19.324816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.614 [2024-07-13 06:12:19.324837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.614 [2024-07-13 06:12:19.324849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:27.614 [2024-07-13 06:12:19.324859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.614 [2024-07-13 06:12:19.324868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.614 [2024-07-13 06:12:19.333381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.614 [2024-07-13 06:12:19.333510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:27.614 [2024-07-13 06:12:19.333551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.614 [2024-07-13 06:12:19.333563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.873 [2024-07-13 06:12:19.340983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.873 [2024-07-13 06:12:19.341054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:27.873 [2024-07-13 06:12:19.341088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.873 [2024-07-13 06:12:19.341109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.873 [2024-07-13 06:12:19.341168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.873 [2024-07-13 06:12:19.341187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:27.873 [2024-07-13 06:12:19.341200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.873 [2024-07-13 06:12:19.341212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.873 [2024-07-13 06:12:19.341294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.873 [2024-07-13 06:12:19.341316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:27.873 [2024-07-13 06:12:19.341329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.873 [2024-07-13 06:12:19.341341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.873 [2024-07-13 06:12:19.341481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.873 [2024-07-13 06:12:19.341498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:27.873 [2024-07-13 06:12:19.341510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.873 [2024-07-13 06:12:19.341520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.873 [2024-07-13 06:12:19.341614] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.873 [2024-07-13 06:12:19.341662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:27.873 [2024-07-13 06:12:19.341689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.873 [2024-07-13 06:12:19.341700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.873 [2024-07-13 06:12:19.341757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.873 [2024-07-13 06:12:19.341774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:27.874 [2024-07-13 06:12:19.341785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.874 [2024-07-13 06:12:19.341796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.874 [2024-07-13 06:12:19.341850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:27.874 [2024-07-13 06:12:19.341883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:27.874 [2024-07-13 06:12:19.341905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:27.874 [2024-07-13 06:12:19.341916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.874 [2024-07-13 06:12:19.342091] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.723 ms, result 0 00:25:27.874 00:25:27.874 00:25:27.874 06:12:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:30.410 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:30.410 06:12:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:30.410 06:12:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:30.410 06:12:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:30.410 06:12:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:30.410 06:12:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:30.410 06:12:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:30.410 06:12:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:30.410 Process with pid 93299 is not found 00:25:30.410 06:12:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 93299 00:25:30.410 06:12:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@948 -- # '[' -z 93299 ']' 00:25:30.410 06:12:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # kill -0 93299 00:25:30.410 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (93299) - No such process 00:25:30.410 06:12:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@975 -- # echo 'Process with pid 93299 is not found' 00:25:30.410 06:12:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:30.668 Remove shared memory files 00:25:30.668 06:12:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:30.668 06:12:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:30.668 06:12:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 
00:25:30.668 06:12:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:30.668 06:12:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:25:30.668 06:12:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:30.668 06:12:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:30.668 ************************************ 00:25:30.668 END TEST ftl_dirty_shutdown 00:25:30.668 ************************************ 00:25:30.668 00:25:30.668 real 3m45.526s 00:25:30.668 user 4m19.676s 00:25:30.668 sys 0m34.598s 00:25:30.668 06:12:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:30.668 06:12:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:30.668 06:12:22 ftl -- common/autotest_common.sh@1142 -- # return 0 00:25:30.668 06:12:22 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:30.668 06:12:22 ftl -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:25:30.668 06:12:22 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.668 06:12:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:30.668 ************************************ 00:25:30.668 START TEST ftl_upgrade_shutdown 00:25:30.668 ************************************ 00:25:30.668 06:12:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:30.928 * Looking for test storage... 00:25:30.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
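The `md5sum -c` pass above is the core assertion of the dirty-shutdown test that just finished: data written through the FTL bdev before the unclean shutdown must still match a checksum recorded beforehand. A minimal sketch of that round-trip, assuming the write and the shutdown itself happen in earlier steps of dirty_shutdown.sh (paths as in the log):

    # record a checksum for data written through the FTL bdev
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 > testfile2.md5
    # ... unclean shutdown, then FTL restart + recovery ...
    # verify after recovery; "testfile2: OK" means nothing was lost
    md5sum -c testfile2.md5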
00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:30.928 
06:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95669 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95669 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 95669 ']' 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:30.928 06:12:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:30.928 [2024-07-13 06:12:22.546719] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:25:30.928 [2024-07-13 06:12:22.546905] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95669 ] 00:25:31.187 [2024-07-13 06:12:22.696853] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.187 [2024-07-13 06:12:22.742592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:25:31.754 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:32.013 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:25:32.271 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:32.271 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:32.271 06:12:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:32.271 06:12:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:25:32.271 06:12:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:32.271 06:12:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:32.271 06:12:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:25:32.271 06:12:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:32.531 06:12:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:32.531 { 00:25:32.531 "name": "basen1", 00:25:32.531 "aliases": [ 00:25:32.531 "d8e5c80b-ac86-4c66-b4ef-702b83e06915" 00:25:32.531 ], 00:25:32.531 "product_name": "NVMe disk", 00:25:32.531 "block_size": 4096, 00:25:32.531 "num_blocks": 1310720, 00:25:32.531 "uuid": "d8e5c80b-ac86-4c66-b4ef-702b83e06915", 00:25:32.531 "assigned_rate_limits": { 00:25:32.531 "rw_ios_per_sec": 0, 00:25:32.531 "rw_mbytes_per_sec": 0, 00:25:32.531 "r_mbytes_per_sec": 0, 00:25:32.531 "w_mbytes_per_sec": 0 00:25:32.531 }, 00:25:32.531 "claimed": true, 00:25:32.531 "claim_type": "read_many_write_one", 00:25:32.531 "zoned": false, 00:25:32.531 "supported_io_types": { 00:25:32.531 "read": true, 00:25:32.531 "write": true, 00:25:32.531 "unmap": true, 00:25:32.531 "flush": true, 00:25:32.531 "reset": true, 00:25:32.531 "nvme_admin": true, 00:25:32.531 "nvme_io": true, 00:25:32.531 "nvme_io_md": false, 00:25:32.531 "write_zeroes": true, 00:25:32.531 "zcopy": false, 00:25:32.531 "get_zone_info": false, 00:25:32.531 "zone_management": false, 00:25:32.531 "zone_append": false, 00:25:32.531 "compare": true, 00:25:32.531 "compare_and_write": false, 00:25:32.531 "abort": true, 00:25:32.531 "seek_hole": false, 00:25:32.531 "seek_data": false, 00:25:32.531 "copy": true, 00:25:32.531 "nvme_iov_md": false 00:25:32.531 }, 00:25:32.531 "driver_specific": { 00:25:32.531 "nvme": [ 00:25:32.531 { 00:25:32.531 "pci_address": "0000:00:11.0", 00:25:32.531 "trid": { 00:25:32.531 "trtype": "PCIe", 00:25:32.531 "traddr": "0000:00:11.0" 00:25:32.531 }, 00:25:32.531 "ctrlr_data": { 00:25:32.531 "cntlid": 0, 00:25:32.531 "vendor_id": "0x1b36", 00:25:32.531 "model_number": "QEMU NVMe Ctrl", 00:25:32.531 "serial_number": "12341", 00:25:32.531 "firmware_revision": "8.0.0", 00:25:32.531 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:32.531 "oacs": { 00:25:32.531 "security": 0, 00:25:32.531 "format": 1, 00:25:32.531 "firmware": 0, 00:25:32.531 "ns_manage": 1 00:25:32.531 }, 00:25:32.531 "multi_ctrlr": false, 00:25:32.531 "ana_reporting": false 00:25:32.531 }, 00:25:32.531 "vs": { 00:25:32.531 "nvme_version": "1.4" 00:25:32.531 }, 00:25:32.531 "ns_data": { 00:25:32.531 "id": 1, 00:25:32.531 "can_share": false 00:25:32.531 } 00:25:32.531 } 00:25:32.531 ], 00:25:32.531 "mp_policy": "active_passive" 00:25:32.531 } 00:25:32.531 } 00:25:32.531 ]' 00:25:32.531 06:12:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:32.531 06:12:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:32.531 06:12:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:32.531 06:12:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:25:32.531 06:12:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:25:32.531 06:12:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:25:32.531 06:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:32.531 06:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:32.531 06:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:32.531 06:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:32.531 06:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:32.819 06:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=6fc340ec-97ca-4c1c-b204-cf9da6b00e2e 00:25:32.819 06:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:32.819 06:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6fc340ec-97ca-4c1c-b204-cf9da6b00e2e 00:25:33.089 06:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:33.348 06:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=45231e4b-3d82-499b-b18b-52a30b4efec1 00:25:33.348 06:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 45231e4b-3d82-499b-b18b-52a30b4efec1 00:25:33.348 06:12:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=a0dd7dee-5446-4215-9800-3628c28da873 00:25:33.348 06:12:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z a0dd7dee-5446-4215-9800-3628c28da873 ]] 00:25:33.348 06:12:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 a0dd7dee-5446-4215-9800-3628c28da873 5120 00:25:33.348 06:12:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:25:33.348 06:12:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:33.348 06:12:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=a0dd7dee-5446-4215-9800-3628c28da873 00:25:33.348 06:12:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:25:33.348 06:12:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size a0dd7dee-5446-4215-9800-3628c28da873 00:25:33.348 06:12:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=a0dd7dee-5446-4215-9800-3628c28da873 00:25:33.348 06:12:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:33.348 06:12:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:33.348 06:12:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:33.348 06:12:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a0dd7dee-5446-4215-9800-3628c28da873 00:25:33.607 06:12:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:33.607 { 00:25:33.607 "name": "a0dd7dee-5446-4215-9800-3628c28da873", 00:25:33.607 "aliases": [ 00:25:33.607 "lvs/basen1p0" 00:25:33.607 ], 00:25:33.607 "product_name": "Logical Volume", 00:25:33.607 "block_size": 4096, 00:25:33.607 "num_blocks": 5242880, 00:25:33.607 "uuid": "a0dd7dee-5446-4215-9800-3628c28da873", 00:25:33.607 "assigned_rate_limits": { 00:25:33.607 "rw_ios_per_sec": 0, 00:25:33.607 "rw_mbytes_per_sec": 0, 00:25:33.607 "r_mbytes_per_sec": 0, 00:25:33.607 "w_mbytes_per_sec": 0 00:25:33.607 }, 00:25:33.607 "claimed": false, 00:25:33.607 "zoned": false, 00:25:33.607 "supported_io_types": { 00:25:33.607 "read": true, 00:25:33.607 "write": true, 00:25:33.607 "unmap": true, 00:25:33.607 "flush": false, 00:25:33.607 "reset": true, 00:25:33.607 "nvme_admin": false, 00:25:33.607 "nvme_io": false, 00:25:33.607 "nvme_io_md": false, 00:25:33.607 "write_zeroes": true, 00:25:33.607 
"zcopy": false, 00:25:33.607 "get_zone_info": false, 00:25:33.607 "zone_management": false, 00:25:33.607 "zone_append": false, 00:25:33.607 "compare": false, 00:25:33.607 "compare_and_write": false, 00:25:33.607 "abort": false, 00:25:33.607 "seek_hole": true, 00:25:33.607 "seek_data": true, 00:25:33.607 "copy": false, 00:25:33.607 "nvme_iov_md": false 00:25:33.607 }, 00:25:33.607 "driver_specific": { 00:25:33.607 "lvol": { 00:25:33.607 "lvol_store_uuid": "45231e4b-3d82-499b-b18b-52a30b4efec1", 00:25:33.607 "base_bdev": "basen1", 00:25:33.607 "thin_provision": true, 00:25:33.607 "num_allocated_clusters": 0, 00:25:33.607 "snapshot": false, 00:25:33.607 "clone": false, 00:25:33.607 "esnap_clone": false 00:25:33.607 } 00:25:33.607 } 00:25:33.607 } 00:25:33.607 ]' 00:25:33.607 06:12:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:33.607 06:12:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:33.607 06:12:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:33.866 06:12:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:25:33.866 06:12:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:25:33.866 06:12:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:25:33.866 06:12:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:25:33.866 06:12:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:33.866 06:12:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:25:34.125 06:12:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:34.125 06:12:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:34.125 06:12:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:34.383 06:12:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:34.383 06:12:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:34.383 06:12:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d a0dd7dee-5446-4215-9800-3628c28da873 -c cachen1p0 --l2p_dram_limit 2 00:25:34.643 [2024-07-13 06:12:26.139394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.643 [2024-07-13 06:12:26.139463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:34.643 [2024-07-13 06:12:26.139504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:34.643 [2024-07-13 06:12:26.139517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.643 [2024-07-13 06:12:26.139601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.643 [2024-07-13 06:12:26.139622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:34.643 [2024-07-13 06:12:26.139640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:25:34.643 [2024-07-13 06:12:26.139651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.643 [2024-07-13 06:12:26.139686] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:34.643 [2024-07-13 06:12:26.140048] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:34.643 [2024-07-13 06:12:26.140096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.643 [2024-07-13 06:12:26.140111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:34.643 [2024-07-13 06:12:26.140126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.419 ms 00:25:34.643 [2024-07-13 06:12:26.140170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.643 [2024-07-13 06:12:26.140322] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID abfae991-9d63-4260-a24f-7fe068285cb3 00:25:34.643 [2024-07-13 06:12:26.141488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.643 [2024-07-13 06:12:26.141577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:34.643 [2024-07-13 06:12:26.141596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:25:34.643 [2024-07-13 06:12:26.141640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.643 [2024-07-13 06:12:26.146296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.643 [2024-07-13 06:12:26.146378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:34.643 [2024-07-13 06:12:26.146394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.603 ms 00:25:34.643 [2024-07-13 06:12:26.146407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.643 [2024-07-13 06:12:26.146494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.643 [2024-07-13 06:12:26.146519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:34.643 [2024-07-13 06:12:26.146547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:25:34.643 [2024-07-13 06:12:26.146576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.643 [2024-07-13 06:12:26.146646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.643 [2024-07-13 06:12:26.146666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:34.643 [2024-07-13 06:12:26.146678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:25:34.643 [2024-07-13 06:12:26.146692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.643 [2024-07-13 06:12:26.146723] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:34.643 [2024-07-13 06:12:26.148190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.643 [2024-07-13 06:12:26.148253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:34.643 [2024-07-13 06:12:26.148272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.472 ms 00:25:34.643 [2024-07-13 06:12:26.148284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.643 [2024-07-13 06:12:26.148321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.643 [2024-07-13 06:12:26.148336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:34.643 [2024-07-13 06:12:26.148350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:34.643 [2024-07-13 06:12:26.148361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
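Pulling the rpc.py calls out of the trace above, the FTL bdev under test is assembled in six steps. A consolidated recap, with the UUIDs from the log replaced by placeholders:

    # base device: thin-provisioned 20480 MiB lvol on the PCIe NVMe at 00:11.0
    rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
    rpc.py bdev_lvol_create_lvstore basen1 lvs
    rpc.py bdev_lvol_create basen1p0 20480 -t -u <lvstore-uuid>
    # NV cache: 5120 MiB split of the PCIe NVMe at 00:10.0
    rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    rpc.py bdev_split_create cachen1 -s 5120 1
    # bind both into an FTL bdev with a 2 MiB L2P DRAM budget
    rpc.py -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2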
00:25:34.643 [2024-07-13 06:12:26.148389] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:34.643 [2024-07-13 06:12:26.148623] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:34.643 [2024-07-13 06:12:26.148659] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:34.643 [2024-07-13 06:12:26.148676] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:25:34.643 [2024-07-13 06:12:26.148696] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:34.643 [2024-07-13 06:12:26.148710] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:34.643 [2024-07-13 06:12:26.148735] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:34.643 [2024-07-13 06:12:26.148749] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:34.643 [2024-07-13 06:12:26.148761] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:34.644 [2024-07-13 06:12:26.148772] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:34.644 [2024-07-13 06:12:26.148786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.644 [2024-07-13 06:12:26.148797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:34.644 [2024-07-13 06:12:26.148819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.401 ms 00:25:34.644 [2024-07-13 06:12:26.148831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.644 [2024-07-13 06:12:26.148923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.644 [2024-07-13 06:12:26.148937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:34.644 [2024-07-13 06:12:26.148953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:25:34.644 [2024-07-13 06:12:26.148965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.644 [2024-07-13 06:12:26.149072] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:34.644 [2024-07-13 06:12:26.149088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:34.644 [2024-07-13 06:12:26.149132] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:34.644 [2024-07-13 06:12:26.149160] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:34.644 [2024-07-13 06:12:26.149180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:34.644 [2024-07-13 06:12:26.149192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:34.644 [2024-07-13 06:12:26.149209] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:34.644 [2024-07-13 06:12:26.149222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:34.644 [2024-07-13 06:12:26.149238] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:34.644 [2024-07-13 06:12:26.149250] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:34.644 [2024-07-13 06:12:26.149264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:34.644 [2024-07-13 06:12:26.149276] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:25:34.644 [2024-07-13 06:12:26.149289] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:34.644 [2024-07-13 06:12:26.149301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:34.644 [2024-07-13 06:12:26.149317] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:34.644 [2024-07-13 06:12:26.149329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:34.644 [2024-07-13 06:12:26.149343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:34.644 [2024-07-13 06:12:26.149355] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:34.644 [2024-07-13 06:12:26.149369] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:34.644 [2024-07-13 06:12:26.149381] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:34.644 [2024-07-13 06:12:26.149395] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:34.644 [2024-07-13 06:12:26.149407] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:34.644 [2024-07-13 06:12:26.149421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:34.644 [2024-07-13 06:12:26.149447] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:34.644 [2024-07-13 06:12:26.149460] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:34.644 [2024-07-13 06:12:26.149471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:34.644 [2024-07-13 06:12:26.149485] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:34.644 [2024-07-13 06:12:26.149510] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:34.644 [2024-07-13 06:12:26.149524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:34.644 [2024-07-13 06:12:26.149535] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:34.644 [2024-07-13 06:12:26.149550] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:34.644 [2024-07-13 06:12:26.149561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:34.644 [2024-07-13 06:12:26.149576] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:34.644 [2024-07-13 06:12:26.149587] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:34.644 [2024-07-13 06:12:26.149600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:34.644 [2024-07-13 06:12:26.149611] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:34.644 [2024-07-13 06:12:26.149624] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:34.644 [2024-07-13 06:12:26.149636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:34.644 [2024-07-13 06:12:26.149649] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:34.644 [2024-07-13 06:12:26.149660] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:34.644 [2024-07-13 06:12:26.149673] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:34.644 [2024-07-13 06:12:26.149684] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:34.644 [2024-07-13 06:12:26.149706] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:34.644 [2024-07-13 06:12:26.149717] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:25:34.644 [2024-07-13 06:12:26.149730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:34.644 [2024-07-13 06:12:26.149745] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:34.644 [2024-07-13 06:12:26.149760] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:34.644 [2024-07-13 06:12:26.149772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:34.644 [2024-07-13 06:12:26.149785] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:34.644 [2024-07-13 06:12:26.149796] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:34.644 [2024-07-13 06:12:26.149809] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:34.644 [2024-07-13 06:12:26.149820] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:34.644 [2024-07-13 06:12:26.149833] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:34.644 [2024-07-13 06:12:26.149849] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:34.644 [2024-07-13 06:12:26.149870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:34.644 [2024-07-13 06:12:26.149883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:34.644 [2024-07-13 06:12:26.149896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:34.644 [2024-07-13 06:12:26.149919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:34.644 [2024-07-13 06:12:26.149933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:34.644 [2024-07-13 06:12:26.149944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:34.644 [2024-07-13 06:12:26.149957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:34.644 [2024-07-13 06:12:26.149969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:34.644 [2024-07-13 06:12:26.149983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:34.644 [2024-07-13 06:12:26.149995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:34.644 [2024-07-13 06:12:26.150008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:34.644 [2024-07-13 06:12:26.150019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:34.644 [2024-07-13 06:12:26.150031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:34.644 [2024-07-13 06:12:26.150042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 
blk_offs:0x2f80 blk_sz:0x20 00:25:34.644 [2024-07-13 06:12:26.150055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:34.644 [2024-07-13 06:12:26.150066] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:34.644 [2024-07-13 06:12:26.150080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:34.644 [2024-07-13 06:12:26.150092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:34.644 [2024-07-13 06:12:26.150105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:34.644 [2024-07-13 06:12:26.150116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:34.644 [2024-07-13 06:12:26.150129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:34.644 [2024-07-13 06:12:26.150141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.644 [2024-07-13 06:12:26.150154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:34.644 [2024-07-13 06:12:26.150196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.135 ms 00:25:34.644 [2024-07-13 06:12:26.150226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.644 [2024-07-13 06:12:26.150285] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
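The layout numbers above are self-consistent: 3774873 L2P entries at 4 bytes each come to 15099492 bytes, roughly 14.40 MiB, which is what the 14.50 MiB l2p region holds once rounded up to the region's block granularity. The --l2p_dram_limit 2 passed at create time caps how much of that table stays resident in DRAM, which is why the startup below reports "l2p maximum resident size is: 1 (of 2) MiB" rather than mapping the full ~14.5 MiB at once.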
00:25:34.644 [2024-07-13 06:12:26.150304] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:36.544 [2024-07-13 06:12:28.196002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.544 [2024-07-13 06:12:28.196093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:36.544 [2024-07-13 06:12:28.196114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2045.731 ms 00:25:36.544 [2024-07-13 06:12:28.196128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.544 [2024-07-13 06:12:28.202896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.544 [2024-07-13 06:12:28.202974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:36.544 [2024-07-13 06:12:28.202992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.648 ms 00:25:36.544 [2024-07-13 06:12:28.203005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.544 [2024-07-13 06:12:28.203076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.544 [2024-07-13 06:12:28.203098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:36.544 [2024-07-13 06:12:28.203110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:25:36.544 [2024-07-13 06:12:28.203123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.544 [2024-07-13 06:12:28.210628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.544 [2024-07-13 06:12:28.210688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:36.544 [2024-07-13 06:12:28.210705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.407 ms 00:25:36.544 [2024-07-13 06:12:28.210718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.544 [2024-07-13 06:12:28.210756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.544 [2024-07-13 06:12:28.210772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:36.544 [2024-07-13 06:12:28.210784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:36.544 [2024-07-13 06:12:28.210796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.544 [2024-07-13 06:12:28.211129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.544 [2024-07-13 06:12:28.211202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:36.544 [2024-07-13 06:12:28.211218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:25:36.544 [2024-07-13 06:12:28.211231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.544 [2024-07-13 06:12:28.211304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.544 [2024-07-13 06:12:28.211324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:36.544 [2024-07-13 06:12:28.211336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:25:36.544 [2024-07-13 06:12:28.211349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.544 [2024-07-13 06:12:28.216628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.544 [2024-07-13 06:12:28.216683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:36.544 [2024-07-13 06:12:28.216699] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.246 ms 00:25:36.544 [2024-07-13 06:12:28.216712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.544 [2024-07-13 06:12:28.224678] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:36.544 [2024-07-13 06:12:28.225501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.544 [2024-07-13 06:12:28.225535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:36.544 [2024-07-13 06:12:28.225554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.703 ms 00:25:36.544 [2024-07-13 06:12:28.225566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.544 [2024-07-13 06:12:28.243642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.544 [2024-07-13 06:12:28.243684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:36.544 [2024-07-13 06:12:28.243724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.040 ms 00:25:36.544 [2024-07-13 06:12:28.243735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.544 [2024-07-13 06:12:28.243835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.544 [2024-07-13 06:12:28.243853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:36.544 [2024-07-13 06:12:28.243867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:25:36.544 [2024-07-13 06:12:28.243878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.544 [2024-07-13 06:12:28.247031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.544 [2024-07-13 06:12:28.247075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:36.544 [2024-07-13 06:12:28.247110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.125 ms 00:25:36.544 [2024-07-13 06:12:28.247132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.544 [2024-07-13 06:12:28.250208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.544 [2024-07-13 06:12:28.250254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:25:36.544 [2024-07-13 06:12:28.250289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.980 ms 00:25:36.544 [2024-07-13 06:12:28.250300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.544 [2024-07-13 06:12:28.250621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.544 [2024-07-13 06:12:28.250639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:36.544 [2024-07-13 06:12:28.250653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.277 ms 00:25:36.544 [2024-07-13 06:12:28.250664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.804 [2024-07-13 06:12:28.283443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.804 [2024-07-13 06:12:28.283504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:36.804 [2024-07-13 06:12:28.283559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.727 ms 00:25:36.804 [2024-07-13 06:12:28.283573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.804 [2024-07-13 06:12:28.287637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:25:36.804 [2024-07-13 06:12:28.287675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:25:36.804 [2024-07-13 06:12:28.287709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.013 ms 00:25:36.804 [2024-07-13 06:12:28.287730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.804 [2024-07-13 06:12:28.291404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.804 [2024-07-13 06:12:28.291442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:25:36.804 [2024-07-13 06:12:28.291476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.626 ms 00:25:36.804 [2024-07-13 06:12:28.291486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.804 [2024-07-13 06:12:28.295183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.804 [2024-07-13 06:12:28.295244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:36.804 [2024-07-13 06:12:28.295263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.650 ms 00:25:36.804 [2024-07-13 06:12:28.295274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.804 [2024-07-13 06:12:28.295329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.804 [2024-07-13 06:12:28.295345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:36.804 [2024-07-13 06:12:28.295359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:36.804 [2024-07-13 06:12:28.295369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.804 [2024-07-13 06:12:28.295472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:36.804 [2024-07-13 06:12:28.295487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:36.804 [2024-07-13 06:12:28.295501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:25:36.804 [2024-07-13 06:12:28.295514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:36.804 [2024-07-13 06:12:28.296713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2156.793 ms, result 0 00:25:36.804 { 00:25:36.804 "name": "ftl", 00:25:36.804 "uuid": "abfae991-9d63-4260-a24f-7fe068285cb3" 00:25:36.804 } 00:25:36.804 06:12:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:37.063 [2024-07-13 06:12:28.554792] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.063 06:12:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:37.323 06:12:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:37.582 [2024-07-13 06:12:29.107407] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:37.582 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:37.840 [2024-07-13 06:12:29.359850] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:37.840 06:12:29 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:38.099 Fill FTL, iteration 1 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=95776 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 95776 /var/tmp/spdk.tgt.sock 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 95776 ']' 00:25:38.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:38.099 06:12:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:38.100 06:12:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:38.100 06:12:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:38.100 06:12:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:38.358 [2024-07-13 06:12:29.836614] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
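Before the fill begins, the FTL bdev has been exported over NVMe/TCP and a second SPDK app (the dd "initiator" on core 1, rpc socket /var/tmp/spdk.tgt.sock) is being brought up to consume it. A recap of the two sides, using the rpc.py calls visible in the log:

    # target side: export the ftl bdev as a TCP namespace
    rpc.py nvmf_create_transport --trtype TCP
    rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    # initiator side: attach over TCP; the resulting bdev is ftln1, the target of spdk_dd
    rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0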
00:25:38.358 [2024-07-13 06:12:29.836812] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95776 ] 00:25:38.358 [2024-07-13 06:12:29.985391] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.358 [2024-07-13 06:12:30.023698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.295 06:12:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:39.295 06:12:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:25:39.295 06:12:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:25:39.295 ftln1 00:25:39.554 06:12:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:25:39.554 06:12:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:25:39.554 06:12:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:25:39.554 06:12:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 95776 00:25:39.554 06:12:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 95776 ']' 00:25:39.554 06:12:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 95776 00:25:39.554 06:12:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:25:39.554 06:12:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:39.554 06:12:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95776 00:25:39.813 killing process with pid 95776 00:25:39.813 06:12:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:39.813 06:12:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:39.813 06:12:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95776' 00:25:39.813 06:12:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 95776 00:25:39.813 06:12:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 95776 00:25:40.072 06:12:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:25:40.072 06:12:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:40.072 [2024-07-13 06:12:31.641701] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:25:40.072 [2024-07-13 06:12:31.641851] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95812 ] 00:25:40.072 [2024-07-13 06:12:31.779252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.331 [2024-07-13 06:12:31.814360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.363  Copying: 214/1024 [MB] (214 MBps) Copying: 431/1024 [MB] (217 MBps) Copying: 646/1024 [MB] (215 MBps) Copying: 864/1024 [MB] (218 MBps) Copying: 1024/1024 [MB] (average 215 MBps) 00:25:45.363 00:25:45.363 Calculate MD5 checksum, iteration 1 00:25:45.363 06:12:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:25:45.364 06:12:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:25:45.364 06:12:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:45.364 06:12:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:45.364 06:12:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:45.364 06:12:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:45.364 06:12:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:45.364 06:12:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:45.364 [2024-07-13 06:12:37.054351] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
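With the fill completing at roughly 215 MBps, the verify pass re-reads the same 1 GiB extent through a fresh spdk_dd instance (pid 95865 below) and, being a pure read, runs noticeably faster at around 460 MBps. The read-back invocation, copied from @44:

    tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0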
00:25:45.364 [2024-07-13 06:12:37.054555] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95865 ] 00:25:45.622 [2024-07-13 06:12:37.199967] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.622 [2024-07-13 06:12:37.235158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.209  Copying: 462/1024 [MB] (462 MBps) Copying: 928/1024 [MB] (466 MBps) Copying: 1024/1024 [MB] (average 463 MBps) 00:25:48.209 00:25:48.209 06:12:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:25:48.209 06:12:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:50.113 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:50.113 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=5ef484417b79ca5f9285141834dd4b89 00:25:50.113 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:50.113 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:50.113 Fill FTL, iteration 2 00:25:50.113 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:25:50.113 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:50.113 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:50.113 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:50.113 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:50.113 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:50.113 06:12:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:50.113 [2024-07-13 06:12:41.737286] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
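The digest of the read-back file is what has to survive the upgrade cycle; upgrade_shutdown.sh stores one digest per iteration and bumps both offsets before looping. In shell terms, with the digest value copied from the @48 entry:

    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    # iteration 1: 5ef484417b79ca5f9285141834dd4b89
    (( i++ ))    # pass 2 then writes at --seek=1024 and reads at --skip=1024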
00:25:50.113 [2024-07-13 06:12:41.737470] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95919 ] 00:25:50.372 [2024-07-13 06:12:41.888354] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.372 [2024-07-13 06:12:41.931721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.484  Copying: 218/1024 [MB] (218 MBps) Copying: 435/1024 [MB] (217 MBps) Copying: 650/1024 [MB] (215 MBps) Copying: 864/1024 [MB] (214 MBps) Copying: 1024/1024 [MB] (average 215 MBps) 00:25:55.484 00:25:55.484 Calculate MD5 checksum, iteration 2 00:25:55.484 06:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:25:55.484 06:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:25:55.484 06:12:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:55.484 06:12:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:55.484 06:12:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:55.484 06:12:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:55.484 06:12:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:55.484 06:12:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:55.484 [2024-07-13 06:12:47.175100] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
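A quick cross-check of the geometry; the per-chunk size is an inference from the property dumps further down (5120 MiB of NV cache over 5 chunks, so about 1 GiB each), not something the trace states directly:

    # Per pass:  count * bs = 1024 * 1 MiB = 1 GiB written, then verified.
    # Offsets:   seek/skip are in bs units, so pass 2 uses 1024 = +1 GiB.
    # Total:     iterations * 1 GiB = 2 GiB, which lines up with the two
    #            CLOSED chunks (utilization 1.0) in the cache_device dump.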
00:25:55.484 [2024-07-13 06:12:47.175299] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95975 ] 00:25:55.742 [2024-07-13 06:12:47.322586] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.742 [2024-07-13 06:12:47.355687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.880  Copying: 457/1024 [MB] (457 MBps) Copying: 916/1024 [MB] (459 MBps) Copying: 1024/1024 [MB] (average 458 MBps) 00:25:58.880 00:25:58.880 06:12:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:25:58.880 06:12:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:00.780 06:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:00.780 06:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=8328fb0b08f0d3ca49e37b7c88881def 00:26:00.780 06:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:00.780 06:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:00.780 06:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:01.037 [2024-07-13 06:12:52.568898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.037 [2024-07-13 06:12:52.568951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:01.037 [2024-07-13 06:12:52.568970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:01.037 [2024-07-13 06:12:52.568981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.037 [2024-07-13 06:12:52.569013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.037 [2024-07-13 06:12:52.569034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:01.037 [2024-07-13 06:12:52.569045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:01.037 [2024-07-13 06:12:52.569055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.037 [2024-07-13 06:12:52.569080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.037 [2024-07-13 06:12:52.569117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:01.037 [2024-07-13 06:12:52.569145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:01.037 [2024-07-13 06:12:52.569205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.037 [2024-07-13 06:12:52.569315] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.378 ms, result 0 00:26:01.037 true 00:26:01.037 06:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:01.295 { 00:26:01.295 "name": "ftl", 00:26:01.295 "properties": [ 00:26:01.295 { 00:26:01.295 "name": "superblock_version", 00:26:01.295 "value": 5, 00:26:01.295 "read-only": true 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "name": "base_device", 00:26:01.295 "bands": [ 00:26:01.295 { 00:26:01.295 "id": 0, 00:26:01.295 "state": "FREE", 00:26:01.295 "validity": 0.0 00:26:01.295 }, 
00:26:01.295 { 00:26:01.295 "id": 1, 00:26:01.295 "state": "FREE", 00:26:01.295 "validity": 0.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 2, 00:26:01.295 "state": "FREE", 00:26:01.295 "validity": 0.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 3, 00:26:01.295 "state": "FREE", 00:26:01.295 "validity": 0.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 4, 00:26:01.295 "state": "FREE", 00:26:01.295 "validity": 0.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 5, 00:26:01.295 "state": "FREE", 00:26:01.295 "validity": 0.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 6, 00:26:01.295 "state": "FREE", 00:26:01.295 "validity": 0.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 7, 00:26:01.295 "state": "FREE", 00:26:01.295 "validity": 0.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 8, 00:26:01.295 "state": "FREE", 00:26:01.295 "validity": 0.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 9, 00:26:01.295 "state": "FREE", 00:26:01.295 "validity": 0.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 10, 00:26:01.295 "state": "FREE", 00:26:01.295 "validity": 0.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 11, 00:26:01.295 "state": "FREE", 00:26:01.295 "validity": 0.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 12, 00:26:01.295 "state": "FREE", 00:26:01.295 "validity": 0.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 13, 00:26:01.295 "state": "FREE", 00:26:01.295 "validity": 0.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 14, 00:26:01.295 "state": "FREE", 00:26:01.295 "validity": 0.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 15, 00:26:01.295 "state": "FREE", 00:26:01.295 "validity": 0.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 16, 00:26:01.295 "state": "FREE", 00:26:01.295 "validity": 0.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 17, 00:26:01.295 "state": "FREE", 00:26:01.295 "validity": 0.0 00:26:01.295 } 00:26:01.295 ], 00:26:01.295 "read-only": true 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "name": "cache_device", 00:26:01.295 "type": "bdev", 00:26:01.295 "chunks": [ 00:26:01.295 { 00:26:01.295 "id": 0, 00:26:01.295 "state": "INACTIVE", 00:26:01.295 "utilization": 0.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 1, 00:26:01.295 "state": "CLOSED", 00:26:01.295 "utilization": 1.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 2, 00:26:01.295 "state": "CLOSED", 00:26:01.295 "utilization": 1.0 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 3, 00:26:01.295 "state": "OPEN", 00:26:01.295 "utilization": 0.001953125 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "id": 4, 00:26:01.295 "state": "OPEN", 00:26:01.295 "utilization": 0.0 00:26:01.295 } 00:26:01.295 ], 00:26:01.295 "read-only": true 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "name": "verbose_mode", 00:26:01.295 "value": true, 00:26:01.295 "unit": "", 00:26:01.295 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:01.295 }, 00:26:01.295 { 00:26:01.295 "name": "prep_upgrade_on_shutdown", 00:26:01.295 "value": false, 00:26:01.295 "unit": "", 00:26:01.295 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:01.295 } 00:26:01.295 ] 00:26:01.295 } 00:26:01.295 06:12:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:01.553 [2024-07-13 06:12:53.077410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.553 [2024-07-13 
06:12:53.077456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:01.553 [2024-07-13 06:12:53.077490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:01.553 [2024-07-13 06:12:53.077516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.553 [2024-07-13 06:12:53.077545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.553 [2024-07-13 06:12:53.077558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:01.553 [2024-07-13 06:12:53.077568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:01.553 [2024-07-13 06:12:53.077576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.553 [2024-07-13 06:12:53.077599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.553 [2024-07-13 06:12:53.077610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:01.553 [2024-07-13 06:12:53.077619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:01.553 [2024-07-13 06:12:53.077627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.553 [2024-07-13 06:12:53.077687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.265 ms, result 0 00:26:01.553 true 00:26:01.553 06:12:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:01.553 06:12:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:01.553 06:12:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:01.811 06:12:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:26:01.811 06:12:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:26:01.811 06:12:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:01.811 [2024-07-13 06:12:53.473930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.811 [2024-07-13 06:12:53.473978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:01.811 [2024-07-13 06:12:53.474042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:01.811 [2024-07-13 06:12:53.474052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.811 [2024-07-13 06:12:53.474084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.811 [2024-07-13 06:12:53.474098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:01.811 [2024-07-13 06:12:53.474108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:01.811 [2024-07-13 06:12:53.474116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:01.811 [2024-07-13 06:12:53.474155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:01.811 [2024-07-13 06:12:53.474183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:01.811 [2024-07-13 06:12:53.474206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:01.811 [2024-07-13 06:12:53.474218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
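The used=3 a few entries above is the count of NV-cache chunks still holding data: chunks 1 and 2 CLOSED at utilization 1.0 plus chunk 3 OPEN at ~0.002. Reconstructed as a standalone probe from the @59/@63 entries:

    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length')
    # @64 then tests [[ $used -eq 0 ]]; here it is 3, so the shutdown that
    # @74 triggers below has real buffered data to persist.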
00:26:01.811 [2024-07-13 06:12:53.474287] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.343 ms, result 0 00:26:01.811 true 00:26:01.811 06:12:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:02.070 { 00:26:02.070 "name": "ftl", 00:26:02.070 "properties": [ 00:26:02.070 { 00:26:02.070 "name": "superblock_version", 00:26:02.070 "value": 5, 00:26:02.070 "read-only": true 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "name": "base_device", 00:26:02.070 "bands": [ 00:26:02.070 { 00:26:02.070 "id": 0, 00:26:02.070 "state": "FREE", 00:26:02.070 "validity": 0.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 1, 00:26:02.070 "state": "FREE", 00:26:02.070 "validity": 0.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 2, 00:26:02.070 "state": "FREE", 00:26:02.070 "validity": 0.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 3, 00:26:02.070 "state": "FREE", 00:26:02.070 "validity": 0.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 4, 00:26:02.070 "state": "FREE", 00:26:02.070 "validity": 0.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 5, 00:26:02.070 "state": "FREE", 00:26:02.070 "validity": 0.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 6, 00:26:02.070 "state": "FREE", 00:26:02.070 "validity": 0.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 7, 00:26:02.070 "state": "FREE", 00:26:02.070 "validity": 0.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 8, 00:26:02.070 "state": "FREE", 00:26:02.070 "validity": 0.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 9, 00:26:02.070 "state": "FREE", 00:26:02.070 "validity": 0.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 10, 00:26:02.070 "state": "FREE", 00:26:02.070 "validity": 0.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 11, 00:26:02.070 "state": "FREE", 00:26:02.070 "validity": 0.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 12, 00:26:02.070 "state": "FREE", 00:26:02.070 "validity": 0.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 13, 00:26:02.070 "state": "FREE", 00:26:02.070 "validity": 0.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 14, 00:26:02.070 "state": "FREE", 00:26:02.070 "validity": 0.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 15, 00:26:02.070 "state": "FREE", 00:26:02.070 "validity": 0.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 16, 00:26:02.070 "state": "FREE", 00:26:02.070 "validity": 0.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 17, 00:26:02.070 "state": "FREE", 00:26:02.070 "validity": 0.0 00:26:02.070 } 00:26:02.070 ], 00:26:02.070 "read-only": true 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "name": "cache_device", 00:26:02.070 "type": "bdev", 00:26:02.070 "chunks": [ 00:26:02.070 { 00:26:02.070 "id": 0, 00:26:02.070 "state": "INACTIVE", 00:26:02.070 "utilization": 0.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 1, 00:26:02.070 "state": "CLOSED", 00:26:02.070 "utilization": 1.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 2, 00:26:02.070 "state": "CLOSED", 00:26:02.070 "utilization": 1.0 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 3, 00:26:02.070 "state": "OPEN", 00:26:02.070 "utilization": 0.001953125 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "id": 4, 00:26:02.070 "state": "OPEN", 00:26:02.070 "utilization": 0.0 00:26:02.070 } 00:26:02.070 ], 00:26:02.070 "read-only": true 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "name": "verbose_mode", 00:26:02.070 "value": 
true, 00:26:02.070 "unit": "", 00:26:02.070 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:02.070 }, 00:26:02.070 { 00:26:02.070 "name": "prep_upgrade_on_shutdown", 00:26:02.070 "value": true, 00:26:02.070 "unit": "", 00:26:02.070 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:02.070 } 00:26:02.070 ] 00:26:02.070 } 00:26:02.070 06:12:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:26:02.070 06:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 95669 ]] 00:26:02.070 06:12:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 95669 00:26:02.070 06:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 95669 ']' 00:26:02.070 06:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 95669 00:26:02.070 06:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:26:02.070 06:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:02.070 06:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95669 00:26:02.070 06:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:02.070 06:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:02.070 06:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95669' 00:26:02.070 killing process with pid 95669 00:26:02.070 06:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 95669 00:26:02.070 06:12:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 95669 00:26:02.329 [2024-07-13 06:12:53.817582] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:02.329 [2024-07-13 06:12:53.821543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.329 [2024-07-13 06:12:53.821714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:02.329 [2024-07-13 06:12:53.821845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:02.329 [2024-07-13 06:12:53.821893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.329 [2024-07-13 06:12:53.822021] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:02.329 [2024-07-13 06:12:53.822587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.329 [2024-07-13 06:12:53.822756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:02.329 [2024-07-13 06:12:53.822781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.491 ms 00:26:02.329 [2024-07-13 06:12:53.822801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.450 [2024-07-13 06:13:01.684383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.450 [2024-07-13 06:13:01.684446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:10.450 [2024-07-13 06:13:01.684482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7861.597 ms 00:26:10.450 [2024-07-13 06:13:01.684499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.450 [2024-07-13 06:13:01.685796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:10.450 [2024-07-13 06:13:01.685859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:10.450 [2024-07-13 06:13:01.685874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.276 ms 00:26:10.450 [2024-07-13 06:13:01.685885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.450 [2024-07-13 06:13:01.686990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.450 [2024-07-13 06:13:01.687018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:10.450 [2024-07-13 06:13:01.687043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.083 ms 00:26:10.450 [2024-07-13 06:13:01.687053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.450 [2024-07-13 06:13:01.688480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.450 [2024-07-13 06:13:01.688545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:10.450 [2024-07-13 06:13:01.688574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.369 ms 00:26:10.450 [2024-07-13 06:13:01.688584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.450 [2024-07-13 06:13:01.690795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.450 [2024-07-13 06:13:01.690848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:10.450 [2024-07-13 06:13:01.690863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.189 ms 00:26:10.450 [2024-07-13 06:13:01.690872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.450 [2024-07-13 06:13:01.690938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.450 [2024-07-13 06:13:01.690954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:10.450 [2024-07-13 06:13:01.690971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:26:10.450 [2024-07-13 06:13:01.690981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.450 [2024-07-13 06:13:01.692314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.450 [2024-07-13 06:13:01.692347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:26:10.450 [2024-07-13 06:13:01.692361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.314 ms 00:26:10.450 [2024-07-13 06:13:01.692370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.450 [2024-07-13 06:13:01.693589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.450 [2024-07-13 06:13:01.693636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:26:10.450 [2024-07-13 06:13:01.693665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.198 ms 00:26:10.450 [2024-07-13 06:13:01.693675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.450 [2024-07-13 06:13:01.694909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.450 [2024-07-13 06:13:01.694958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:10.450 [2024-07-13 06:13:01.694988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.213 ms 00:26:10.450 [2024-07-13 06:13:01.694997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.450 [2024-07-13 06:13:01.696013] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:26:10.450 [2024-07-13 06:13:01.696047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:10.450 [2024-07-13 06:13:01.696077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.968 ms 00:26:10.450 [2024-07-13 06:13:01.696087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.450 [2024-07-13 06:13:01.696122] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:10.450 [2024-07-13 06:13:01.696139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:10.450 [2024-07-13 06:13:01.696150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:10.450 [2024-07-13 06:13:01.696170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:10.450 [2024-07-13 06:13:01.696183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:10.450 [2024-07-13 06:13:01.696192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:10.450 [2024-07-13 06:13:01.696202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:10.450 [2024-07-13 06:13:01.696211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:10.450 [2024-07-13 06:13:01.696221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:10.450 [2024-07-13 06:13:01.696230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:10.450 [2024-07-13 06:13:01.696240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:10.450 [2024-07-13 06:13:01.696249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:10.450 [2024-07-13 06:13:01.696259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:10.450 [2024-07-13 06:13:01.696268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:10.450 [2024-07-13 06:13:01.696277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:10.450 [2024-07-13 06:13:01.696287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:10.450 [2024-07-13 06:13:01.696296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:10.450 [2024-07-13 06:13:01.696305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:10.450 [2024-07-13 06:13:01.696314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:10.450 [2024-07-13 06:13:01.696326] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:10.450 [2024-07-13 06:13:01.696335] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: abfae991-9d63-4260-a24f-7fe068285cb3 00:26:10.450 [2024-07-13 06:13:01.696345] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:10.450 [2024-07-13 06:13:01.696354] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 
00:26:10.450 [2024-07-13 06:13:01.696362] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:10.450 [2024-07-13 06:13:01.696372] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:10.450 [2024-07-13 06:13:01.696381] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:10.450 [2024-07-13 06:13:01.696390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:10.450 [2024-07-13 06:13:01.696405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:10.451 [2024-07-13 06:13:01.696414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:10.451 [2024-07-13 06:13:01.696422] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:10.451 [2024-07-13 06:13:01.696431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.451 [2024-07-13 06:13:01.696441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:10.451 [2024-07-13 06:13:01.696451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.310 ms 00:26:10.451 [2024-07-13 06:13:01.696461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.451 [2024-07-13 06:13:01.697894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.451 [2024-07-13 06:13:01.698043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:10.451 [2024-07-13 06:13:01.698214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.413 ms 00:26:10.451 [2024-07-13 06:13:01.698275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.451 [2024-07-13 06:13:01.698459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:10.451 [2024-07-13 06:13:01.698536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:10.451 [2024-07-13 06:13:01.698726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:26:10.451 [2024-07-13 06:13:01.698775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.451 [2024-07-13 06:13:01.703486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:10.451 [2024-07-13 06:13:01.703667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:10.451 [2024-07-13 06:13:01.703771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:10.451 [2024-07-13 06:13:01.703825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.451 [2024-07-13 06:13:01.703884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:10.451 [2024-07-13 06:13:01.703963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:10.451 [2024-07-13 06:13:01.704011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:10.451 [2024-07-13 06:13:01.704044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.451 [2024-07-13 06:13:01.704150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:10.451 [2024-07-13 06:13:01.704265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:10.451 [2024-07-13 06:13:01.704282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:10.451 [2024-07-13 06:13:01.704305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.451 [2024-07-13 06:13:01.704338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:26:10.451 [2024-07-13 06:13:01.704350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:10.451 [2024-07-13 06:13:01.704360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:10.451 [2024-07-13 06:13:01.704370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.451 [2024-07-13 06:13:01.713285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:10.451 [2024-07-13 06:13:01.713621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:10.451 [2024-07-13 06:13:01.713826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:10.451 [2024-07-13 06:13:01.713897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.451 [2024-07-13 06:13:01.720958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:10.451 [2024-07-13 06:13:01.721150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:10.451 [2024-07-13 06:13:01.721277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:10.451 [2024-07-13 06:13:01.721390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.451 [2024-07-13 06:13:01.721553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:10.451 [2024-07-13 06:13:01.721587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:10.451 [2024-07-13 06:13:01.721601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:10.451 [2024-07-13 06:13:01.721613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.451 [2024-07-13 06:13:01.721660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:10.451 [2024-07-13 06:13:01.721682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:10.451 [2024-07-13 06:13:01.721695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:10.451 [2024-07-13 06:13:01.721706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.451 [2024-07-13 06:13:01.721813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:10.451 [2024-07-13 06:13:01.721834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:10.451 [2024-07-13 06:13:01.721846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:10.451 [2024-07-13 06:13:01.721858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.451 [2024-07-13 06:13:01.721964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:10.451 [2024-07-13 06:13:01.721986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:10.451 [2024-07-13 06:13:01.722012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:10.451 [2024-07-13 06:13:01.722021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.451 [2024-07-13 06:13:01.722066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:10.451 [2024-07-13 06:13:01.722079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:10.451 [2024-07-13 06:13:01.722090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:10.451 [2024-07-13 06:13:01.722099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.451 [2024-07-13 06:13:01.722147] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:10.451 [2024-07-13 06:13:01.722167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:10.451 [2024-07-13 06:13:01.722178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:10.451 [2024-07-13 06:13:01.722188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:10.451 [2024-07-13 06:13:01.722352] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7900.828 ms, result 0 00:26:12.985 06:13:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:12.985 06:13:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:12.985 06:13:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:12.985 06:13:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:12.985 06:13:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:12.985 06:13:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=96149 00:26:12.985 06:13:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:12.985 06:13:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:12.985 06:13:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 96149 00:26:12.985 06:13:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 96149 ']' 00:26:12.985 06:13:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.985 06:13:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:12.985 06:13:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.985 06:13:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:12.985 06:13:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:12.985 [2024-07-13 06:13:04.611707] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
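tcp_target_setup below brings the main target back up from the configuration that save_config captured at the very top of this run (common.sh@126), presumably written to tgt.json since @84 tests for that file; the helper backgrounds spdk_tgt and records its pid. A sketch of the restart:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!                 # 96149 in this run
    waitforlisten $spdk_tgt_pid     # returns once /var/tmp/spdk.sock accepts RPCs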
00:26:12.985 [2024-07-13 06:13:04.612097] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96149 ] 00:26:13.244 [2024-07-13 06:13:04.758629] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.244 [2024-07-13 06:13:04.790586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.504 [2024-07-13 06:13:05.022883] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:13.504 [2024-07-13 06:13:05.023256] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:13.504 [2024-07-13 06:13:05.167044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.504 [2024-07-13 06:13:05.167244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:13.504 [2024-07-13 06:13:05.167402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:13.504 [2024-07-13 06:13:05.167455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.504 [2024-07-13 06:13:05.167651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.504 [2024-07-13 06:13:05.167706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:13.504 [2024-07-13 06:13:05.167744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:26:13.504 [2024-07-13 06:13:05.167880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.504 [2024-07-13 06:13:05.167923] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:13.504 [2024-07-13 06:13:05.168193] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:13.504 [2024-07-13 06:13:05.168218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.504 [2024-07-13 06:13:05.168229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:13.504 [2024-07-13 06:13:05.168240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.303 ms 00:26:13.504 [2024-07-13 06:13:05.168250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.504 [2024-07-13 06:13:05.169393] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:13.504 [2024-07-13 06:13:05.171528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.504 [2024-07-13 06:13:05.171565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:13.504 [2024-07-13 06:13:05.171581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.138 ms 00:26:13.504 [2024-07-13 06:13:05.171590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.504 [2024-07-13 06:13:05.171653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.504 [2024-07-13 06:13:05.171670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:13.504 [2024-07-13 06:13:05.171681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:13.504 [2024-07-13 06:13:05.171690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.504 [2024-07-13 06:13:05.175855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.504 [2024-07-13 
06:13:05.175892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:13.504 [2024-07-13 06:13:05.175906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.071 ms 00:26:13.504 [2024-07-13 06:13:05.175916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.504 [2024-07-13 06:13:05.175964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.504 [2024-07-13 06:13:05.175980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:13.504 [2024-07-13 06:13:05.175990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:13.504 [2024-07-13 06:13:05.176003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.504 [2024-07-13 06:13:05.176069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.504 [2024-07-13 06:13:05.176086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:13.504 [2024-07-13 06:13:05.176096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:13.504 [2024-07-13 06:13:05.176106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.504 [2024-07-13 06:13:05.176169] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:13.504 [2024-07-13 06:13:05.177484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.504 [2024-07-13 06:13:05.177547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:13.504 [2024-07-13 06:13:05.177566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.354 ms 00:26:13.504 [2024-07-13 06:13:05.177602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.504 [2024-07-13 06:13:05.177645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.504 [2024-07-13 06:13:05.177659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:13.504 [2024-07-13 06:13:05.177670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:13.504 [2024-07-13 06:13:05.177680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.505 [2024-07-13 06:13:05.177707] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:13.505 [2024-07-13 06:13:05.177733] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:13.505 [2024-07-13 06:13:05.177769] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:13.505 [2024-07-13 06:13:05.177790] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:26:13.505 [2024-07-13 06:13:05.177880] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:13.505 [2024-07-13 06:13:05.177893] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:13.505 [2024-07-13 06:13:05.177906] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:26:13.505 [2024-07-13 06:13:05.177919] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:13.505 [2024-07-13 06:13:05.177931] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:26:13.505 [2024-07-13 06:13:05.177945] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:13.505 [2024-07-13 06:13:05.177972] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:13.505 [2024-07-13 06:13:05.177982] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:13.505 [2024-07-13 06:13:05.178008] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:13.505 [2024-07-13 06:13:05.178027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.505 [2024-07-13 06:13:05.178036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:13.505 [2024-07-13 06:13:05.178046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.321 ms 00:26:13.505 [2024-07-13 06:13:05.178055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.505 [2024-07-13 06:13:05.178126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.505 [2024-07-13 06:13:05.178154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:13.505 [2024-07-13 06:13:05.178164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:26:13.505 [2024-07-13 06:13:05.178188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.505 [2024-07-13 06:13:05.178312] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:13.505 [2024-07-13 06:13:05.178333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:13.505 [2024-07-13 06:13:05.178344] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:13.505 [2024-07-13 06:13:05.178354] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.505 [2024-07-13 06:13:05.178380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:13.505 [2024-07-13 06:13:05.178390] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:13.505 [2024-07-13 06:13:05.178400] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:13.505 [2024-07-13 06:13:05.178409] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:13.505 [2024-07-13 06:13:05.178419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:13.505 [2024-07-13 06:13:05.178428] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.505 [2024-07-13 06:13:05.178438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:13.505 [2024-07-13 06:13:05.178447] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:13.505 [2024-07-13 06:13:05.178456] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.505 [2024-07-13 06:13:05.178481] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:13.505 [2024-07-13 06:13:05.178492] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:13.505 [2024-07-13 06:13:05.178502] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.505 [2024-07-13 06:13:05.178511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:13.505 [2024-07-13 06:13:05.178520] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:13.505 [2024-07-13 06:13:05.178545] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.505 [2024-07-13 06:13:05.178569] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:13.505 [2024-07-13 06:13:05.178581] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:13.505 [2024-07-13 06:13:05.178591] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:13.505 [2024-07-13 06:13:05.178601] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:13.505 [2024-07-13 06:13:05.178610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:13.505 [2024-07-13 06:13:05.178619] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:13.505 [2024-07-13 06:13:05.178629] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:13.505 [2024-07-13 06:13:05.178638] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:13.505 [2024-07-13 06:13:05.178647] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:13.505 [2024-07-13 06:13:05.178656] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:13.505 [2024-07-13 06:13:05.178665] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:13.505 [2024-07-13 06:13:05.178674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:13.505 [2024-07-13 06:13:05.178684] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:13.505 [2024-07-13 06:13:05.178693] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:13.505 [2024-07-13 06:13:05.178702] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.505 [2024-07-13 06:13:05.178712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:13.505 [2024-07-13 06:13:05.178721] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:13.505 [2024-07-13 06:13:05.178733] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.505 [2024-07-13 06:13:05.178743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:13.505 [2024-07-13 06:13:05.178752] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:13.505 [2024-07-13 06:13:05.178761] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.505 [2024-07-13 06:13:05.178770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:13.505 [2024-07-13 06:13:05.178779] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:13.505 [2024-07-13 06:13:05.178789] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.505 [2024-07-13 06:13:05.178798] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:13.505 [2024-07-13 06:13:05.178808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:13.505 [2024-07-13 06:13:05.178817] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:13.505 [2024-07-13 06:13:05.178829] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.505 [2024-07-13 06:13:05.178840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:13.505 [2024-07-13 06:13:05.178849] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:13.505 [2024-07-13 06:13:05.178859] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:13.505 [2024-07-13 06:13:05.178868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:13.505 [2024-07-13 06:13:05.178877] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:13.505 [2024-07-13 06:13:05.178892] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:13.505 [2024-07-13 06:13:05.178904] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:13.505 [2024-07-13 06:13:05.178917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:13.505 [2024-07-13 06:13:05.178931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:13.505 [2024-07-13 06:13:05.178941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:13.505 [2024-07-13 06:13:05.178951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:13.505 [2024-07-13 06:13:05.178961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:13.505 [2024-07-13 06:13:05.178971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:13.505 [2024-07-13 06:13:05.178981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:13.505 [2024-07-13 06:13:05.178991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:13.505 [2024-07-13 06:13:05.179001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:13.505 [2024-07-13 06:13:05.179011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:13.505 [2024-07-13 06:13:05.179021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:13.505 [2024-07-13 06:13:05.179032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:13.505 [2024-07-13 06:13:05.179042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:13.505 [2024-07-13 06:13:05.179052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:13.505 [2024-07-13 06:13:05.179065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:13.505 [2024-07-13 06:13:05.179076] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:13.505 [2024-07-13 06:13:05.179087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:13.506 [2024-07-13 06:13:05.179097] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:13.506 [2024-07-13 06:13:05.179108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:13.506 [2024-07-13 06:13:05.179118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:13.506 [2024-07-13 06:13:05.179128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:13.506 [2024-07-13 06:13:05.179139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.506 [2024-07-13 06:13:05.179148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:13.506 [2024-07-13 06:13:05.179168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.883 ms 00:26:13.506 [2024-07-13 06:13:05.179213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.506 [2024-07-13 06:13:05.179273] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:13.506 [2024-07-13 06:13:05.179290] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:16.039 [2024-07-13 06:13:07.353273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.039 [2024-07-13 06:13:07.353337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:16.039 [2024-07-13 06:13:07.353381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2174.014 ms 00:26:16.039 [2024-07-13 06:13:07.353395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.039 [2024-07-13 06:13:07.360409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.039 [2024-07-13 06:13:07.360471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:16.039 [2024-07-13 06:13:07.360489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.883 ms 00:26:16.039 [2024-07-13 06:13:07.360515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.039 [2024-07-13 06:13:07.360625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.039 [2024-07-13 06:13:07.360640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:16.039 [2024-07-13 06:13:07.360657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:16.039 [2024-07-13 06:13:07.360667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.039 [2024-07-13 06:13:07.368426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.039 [2024-07-13 06:13:07.368478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:16.039 [2024-07-13 06:13:07.368527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.691 ms 00:26:16.039 [2024-07-13 06:13:07.368561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.039 [2024-07-13 06:13:07.368604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.039 [2024-07-13 06:13:07.368624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:16.039 [2024-07-13 06:13:07.368634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:16.039 [2024-07-13 06:13:07.368652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.039 [2024-07-13 06:13:07.368961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.039 [2024-07-13 06:13:07.368977] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:16.039 [2024-07-13 06:13:07.368989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.268 ms 00:26:16.039 [2024-07-13 06:13:07.369012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.039 [2024-07-13 06:13:07.369055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.039 [2024-07-13 06:13:07.369068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:16.039 [2024-07-13 06:13:07.369082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:26:16.039 [2024-07-13 06:13:07.369117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.039 [2024-07-13 06:13:07.374350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.039 [2024-07-13 06:13:07.374386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:16.039 [2024-07-13 06:13:07.374402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.154 ms 00:26:16.039 [2024-07-13 06:13:07.374412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.039 [2024-07-13 06:13:07.376624] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:16.039 [2024-07-13 06:13:07.376666] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:16.039 [2024-07-13 06:13:07.376704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.039 [2024-07-13 06:13:07.376715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:16.039 [2024-07-13 06:13:07.376726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.183 ms 00:26:16.039 [2024-07-13 06:13:07.376736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.039 [2024-07-13 06:13:07.380358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.039 [2024-07-13 06:13:07.380393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:16.039 [2024-07-13 06:13:07.380424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.595 ms 00:26:16.039 [2024-07-13 06:13:07.380434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.039 [2024-07-13 06:13:07.382042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.039 [2024-07-13 06:13:07.382077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:16.039 [2024-07-13 06:13:07.382108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.563 ms 00:26:16.040 [2024-07-13 06:13:07.382118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.040 [2024-07-13 06:13:07.383639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.040 [2024-07-13 06:13:07.383673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:16.040 [2024-07-13 06:13:07.383704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.496 ms 00:26:16.040 [2024-07-13 06:13:07.383713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.040 [2024-07-13 06:13:07.384034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.040 [2024-07-13 06:13:07.384054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:16.040 [2024-07-13 
06:13:07.384076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.261 ms 00:26:16.040 [2024-07-13 06:13:07.384086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.040 [2024-07-13 06:13:07.413362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.040 [2024-07-13 06:13:07.413434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:16.040 [2024-07-13 06:13:07.413454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.250 ms 00:26:16.040 [2024-07-13 06:13:07.413465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.040 [2024-07-13 06:13:07.420291] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:16.040 [2024-07-13 06:13:07.420883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.040 [2024-07-13 06:13:07.420917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:16.040 [2024-07-13 06:13:07.420933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.350 ms 00:26:16.040 [2024-07-13 06:13:07.420944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.040 [2024-07-13 06:13:07.421030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.040 [2024-07-13 06:13:07.421050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:16.040 [2024-07-13 06:13:07.421062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:16.040 [2024-07-13 06:13:07.421072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.040 [2024-07-13 06:13:07.421220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.040 [2024-07-13 06:13:07.421242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:16.040 [2024-07-13 06:13:07.421254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:26:16.040 [2024-07-13 06:13:07.421265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.040 [2024-07-13 06:13:07.421300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.040 [2024-07-13 06:13:07.421326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:16.040 [2024-07-13 06:13:07.421337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:16.040 [2024-07-13 06:13:07.421348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.040 [2024-07-13 06:13:07.421411] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:16.040 [2024-07-13 06:13:07.421430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.040 [2024-07-13 06:13:07.421444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:16.040 [2024-07-13 06:13:07.421486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:16.040 [2024-07-13 06:13:07.421498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.040 [2024-07-13 06:13:07.424560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.040 [2024-07-13 06:13:07.424598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:16.040 [2024-07-13 06:13:07.424613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.034 ms 00:26:16.040 [2024-07-13 06:13:07.424624] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:16.040 [2024-07-13 06:13:07.424692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.040 [2024-07-13 06:13:07.424709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:16.040 [2024-07-13 06:13:07.424727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:26:16.040 [2024-07-13 06:13:07.424745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.040 [2024-07-13 06:13:07.426255] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2258.689 ms, result 0 00:26:16.040 [2024-07-13 06:13:07.441558] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.040 [2024-07-13 06:13:07.457555] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:16.040 [2024-07-13 06:13:07.465674] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:16.040 06:13:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:16.040 06:13:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:26:16.040 06:13:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:16.040 06:13:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:16.040 06:13:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:16.040 [2024-07-13 06:13:07.757819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.040 [2024-07-13 06:13:07.757862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:16.040 [2024-07-13 06:13:07.757889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:16.040 [2024-07-13 06:13:07.757900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.040 [2024-07-13 06:13:07.757929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.040 [2024-07-13 06:13:07.757943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:16.040 [2024-07-13 06:13:07.757953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:16.040 [2024-07-13 06:13:07.757962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.040 [2024-07-13 06:13:07.757991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.040 [2024-07-13 06:13:07.758003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:16.040 [2024-07-13 06:13:07.758013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:16.040 [2024-07-13 06:13:07.758031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.040 [2024-07-13 06:13:07.758091] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.270 ms, result 0 00:26:16.040 true 00:26:16.298 06:13:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:16.298 { 00:26:16.298 "name": "ftl", 00:26:16.298 "properties": [ 00:26:16.298 { 00:26:16.298 "name": "superblock_version", 00:26:16.298 "value": 5, 00:26:16.298 "read-only": true 00:26:16.298 }, 00:26:16.298 { 
00:26:16.298 "name": "base_device", 00:26:16.298 "bands": [ 00:26:16.298 { 00:26:16.298 "id": 0, 00:26:16.298 "state": "CLOSED", 00:26:16.298 "validity": 1.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 1, 00:26:16.298 "state": "CLOSED", 00:26:16.298 "validity": 1.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 2, 00:26:16.298 "state": "CLOSED", 00:26:16.298 "validity": 0.007843137254901933 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 3, 00:26:16.298 "state": "FREE", 00:26:16.298 "validity": 0.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 4, 00:26:16.298 "state": "FREE", 00:26:16.298 "validity": 0.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 5, 00:26:16.298 "state": "FREE", 00:26:16.298 "validity": 0.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 6, 00:26:16.298 "state": "FREE", 00:26:16.298 "validity": 0.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 7, 00:26:16.298 "state": "FREE", 00:26:16.298 "validity": 0.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 8, 00:26:16.298 "state": "FREE", 00:26:16.298 "validity": 0.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 9, 00:26:16.298 "state": "FREE", 00:26:16.298 "validity": 0.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 10, 00:26:16.298 "state": "FREE", 00:26:16.298 "validity": 0.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 11, 00:26:16.298 "state": "FREE", 00:26:16.298 "validity": 0.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 12, 00:26:16.298 "state": "FREE", 00:26:16.298 "validity": 0.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 13, 00:26:16.298 "state": "FREE", 00:26:16.298 "validity": 0.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 14, 00:26:16.298 "state": "FREE", 00:26:16.298 "validity": 0.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 15, 00:26:16.298 "state": "FREE", 00:26:16.298 "validity": 0.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 16, 00:26:16.298 "state": "FREE", 00:26:16.298 "validity": 0.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 17, 00:26:16.298 "state": "FREE", 00:26:16.298 "validity": 0.0 00:26:16.298 } 00:26:16.298 ], 00:26:16.298 "read-only": true 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "name": "cache_device", 00:26:16.298 "type": "bdev", 00:26:16.298 "chunks": [ 00:26:16.298 { 00:26:16.298 "id": 0, 00:26:16.298 "state": "INACTIVE", 00:26:16.298 "utilization": 0.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 1, 00:26:16.298 "state": "OPEN", 00:26:16.298 "utilization": 0.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 2, 00:26:16.298 "state": "OPEN", 00:26:16.298 "utilization": 0.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 3, 00:26:16.298 "state": "FREE", 00:26:16.298 "utilization": 0.0 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "id": 4, 00:26:16.298 "state": "FREE", 00:26:16.298 "utilization": 0.0 00:26:16.298 } 00:26:16.298 ], 00:26:16.298 "read-only": true 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "name": "verbose_mode", 00:26:16.298 "value": true, 00:26:16.298 "unit": "", 00:26:16.298 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:16.298 }, 00:26:16.298 { 00:26:16.298 "name": "prep_upgrade_on_shutdown", 00:26:16.298 "value": false, 00:26:16.298 "unit": "", 00:26:16.298 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:16.298 } 00:26:16.298 ] 00:26:16.298 } 00:26:16.298 06:13:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:26:16.298 06:13:07 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:16.298 06:13:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:16.556 06:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:16.556 06:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:16.556 06:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:16.556 06:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:16.556 06:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:16.814 Validate MD5 checksum, iteration 1 00:26:16.814 06:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:16.814 06:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:16.814 06:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:16.814 06:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:16.814 06:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:16.814 06:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:16.814 06:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:16.814 06:13:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:16.814 06:13:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:16.814 06:13:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:16.814 06:13:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:16.814 06:13:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:16.814 06:13:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:17.072 [2024-07-13 06:13:08.569604] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
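Two jq assertions gate the first checksum pass above: one counts cache_device chunks with non-zero utilization (used=0) and one counts bands in state OPENED (opened=0); only when both are zero does upgrade_shutdown.sh proceed to test_validate_checksum. A minimal sketch of the same queries, assuming $rpc points at the rpc.py invocation shown in the traces (the variable names here are illustrative, not the script's own):

    # Re-run the two gating queries against the live FTL bdev.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    props=$($rpc bdev_ftl_get_properties -b ftl)

    # Cache-device chunks that still hold data.
    used=$(jq '[.properties[] | select(.name == "cache_device")
                | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")

    # Bands currently OPENED, filtered exactly as the trace does.
    opened=$(jq '[.properties[] | select(.name == "bands")
                 | .bands[] | select(.state == "OPENED")] | length' <<< "$props")

    (( used == 0 && opened == 0 )) || exit 1

Note that in the JSON dumped above the band list sits under the property named "base_device"; the traced filter selects a property named "bands", so it appears to match nothing and yields 0 regardless of band state.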
00:26:17.072 [2024-07-13 06:13:08.569938] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96205 ] 00:26:17.072 [2024-07-13 06:13:08.712677] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.072 [2024-07-13 06:13:08.754713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.207  Copying: 490/1024 [MB] (490 MBps) Copying: 986/1024 [MB] (496 MBps) Copying: 1024/1024 [MB] (average 490 MBps) 00:26:20.207 00:26:20.207 06:13:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:20.207 06:13:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:22.182 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:22.182 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5ef484417b79ca5f9285141834dd4b89 00:26:22.182 Validate MD5 checksum, iteration 2 00:26:22.182 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5ef484417b79ca5f9285141834dd4b89 != \5\e\f\4\8\4\4\1\7\b\7\9\c\a\5\f\9\2\8\5\1\4\1\8\3\4\d\d\4\b\8\9 ]] 00:26:22.182 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:22.182 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:22.182 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:22.182 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:22.182 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:22.182 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:22.182 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:22.182 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:22.182 06:13:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:22.182 [2024-07-13 06:13:13.721260] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:26:22.183 [2024-07-13 06:13:13.721422] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96262 ] 00:26:22.183 [2024-07-13 06:13:13.871522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.441 [2024-07-13 06:13:13.913280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.286  Copying: 494/1024 [MB] (494 MBps) Copying: 974/1024 [MB] (480 MBps) Copying: 1024/1024 [MB] (average 485 MBps) 00:26:27.286 00:26:27.286 06:13:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:27.286 06:13:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8328fb0b08f0d3ca49e37b7c88881def 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8328fb0b08f0d3ca49e37b7c88881def != \8\3\2\8\f\b\0\b\0\8\f\0\d\3\c\a\4\9\e\3\7\b\7\c\8\8\8\8\1\d\e\f ]] 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 96149 ]] 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 96149 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=96337 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 96337 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@829 -- # '[' -z 96337 ']' 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:29.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
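The traces above are the pivot of the test: both 1024 MiB windows of ftln1 have checksummed clean (5ef484417b79ca5f9285141834dd4b89 for skip=0, 8328fb0b08f0d3ca49e37b7c88881def for skip=1024), so tcp_target_shutdown_dirty kills the running target (pid 96149) with SIGKILL instead of shutting it down cleanly, and tcp_target_setup relaunches spdk_tgt (pid 96337) from the saved tgt.json. A condensed sketch of that flow, reusing the helper names and paths from the traces (tcp_dd, waitforlisten) with an illustrative $testfile; the real logic lives in test/ftl/common.sh and upgrade_shutdown.sh:

    # Expected MD5 sums recorded during the pre-shutdown pass (from the log).
    declare -A expected=(
      [0]=5ef484417b79ca5f9285141834dd4b89
      [1024]=8328fb0b08f0d3ca49e37b7c88881def
    )

    kill -9 "$spdk_tgt_pid"    # dirty shutdown: no FTL teardown runs
    unset spdk_tgt_pid

    # Relaunch from the saved target config; FTL must come up dirty.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"    # block until the RPC socket is up

    # Re-validate: data acknowledged before the kill must checksum identically.
    for skip in 0 1024; do
      tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
      sum=$(md5sum "$testfile" | cut -f1 -d' ')
      [[ $sum == "${expected[$skip]}" ]] || exit 1
    done

The dirty startup that follows shows the difference from the first, clean bring-up: the superblock loads with "SHM: clean 0, shm_clean 0" and FTL takes the recovery path (Recover band state, Preprocess/Restore P2L checkpoints, Recover open bands P2L, Recover chunk state, and open-chunk recovery at offsets 262144 and 524288), after which the checksum loop repeats against the recovered device; identical sums are the data-integrity assertion of the kill -9 path.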
00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:29.191 06:13:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:29.191 [2024-07-13 06:13:20.761842] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:26:29.191 [2024-07-13 06:13:20.762035] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96337 ] 00:26:29.191 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 96149 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:29.191 [2024-07-13 06:13:20.906723] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.451 [2024-07-13 06:13:20.939439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.451 [2024-07-13 06:13:21.173740] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:29.451 [2024-07-13 06:13:21.173826] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:29.711 [2024-07-13 06:13:21.318074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.711 [2024-07-13 06:13:21.318111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:29.711 [2024-07-13 06:13:21.318128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:29.711 [2024-07-13 06:13:21.318154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.711 [2024-07-13 06:13:21.318222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.711 [2024-07-13 06:13:21.318239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:29.711 [2024-07-13 06:13:21.318249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:29.711 [2024-07-13 06:13:21.318257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.711 [2024-07-13 06:13:21.318287] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:29.711 [2024-07-13 06:13:21.318586] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:29.711 [2024-07-13 06:13:21.318618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.711 [2024-07-13 06:13:21.318630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:29.711 [2024-07-13 06:13:21.318641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.338 ms 00:26:29.711 [2024-07-13 06:13:21.318650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.711 [2024-07-13 06:13:21.319060] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:29.711 [2024-07-13 06:13:21.322263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.711 [2024-07-13 06:13:21.322312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:29.711 [2024-07-13 06:13:21.322327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.205 ms 00:26:29.711 [2024-07-13 06:13:21.322337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.711 [2024-07-13 06:13:21.323237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:26:29.711 [2024-07-13 06:13:21.323274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:29.711 [2024-07-13 06:13:21.323287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:29.711 [2024-07-13 06:13:21.323297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.711 [2024-07-13 06:13:21.323669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.711 [2024-07-13 06:13:21.323698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:29.711 [2024-07-13 06:13:21.323721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.289 ms 00:26:29.711 [2024-07-13 06:13:21.323731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.711 [2024-07-13 06:13:21.323804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.711 [2024-07-13 06:13:21.323835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:29.711 [2024-07-13 06:13:21.323846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:26:29.711 [2024-07-13 06:13:21.323859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.711 [2024-07-13 06:13:21.323894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.711 [2024-07-13 06:13:21.323909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:29.711 [2024-07-13 06:13:21.323919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:29.711 [2024-07-13 06:13:21.323928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.711 [2024-07-13 06:13:21.323958] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:29.711 [2024-07-13 06:13:21.324932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.711 [2024-07-13 06:13:21.324964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:29.711 [2024-07-13 06:13:21.324981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.977 ms 00:26:29.711 [2024-07-13 06:13:21.324990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.711 [2024-07-13 06:13:21.325023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.711 [2024-07-13 06:13:21.325045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:29.711 [2024-07-13 06:13:21.325055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:29.711 [2024-07-13 06:13:21.325064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.712 [2024-07-13 06:13:21.325138] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:29.712 [2024-07-13 06:13:21.325194] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:29.712 [2024-07-13 06:13:21.325240] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:29.712 [2024-07-13 06:13:21.325265] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:26:29.712 [2024-07-13 06:13:21.325365] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:29.712 [2024-07-13 06:13:21.325394] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:29.712 [2024-07-13 06:13:21.325407] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:26:29.712 [2024-07-13 06:13:21.325424] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:29.712 [2024-07-13 06:13:21.325437] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:29.712 [2024-07-13 06:13:21.325456] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:29.712 [2024-07-13 06:13:21.325466] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:29.712 [2024-07-13 06:13:21.325475] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:29.712 [2024-07-13 06:13:21.325488] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:29.712 [2024-07-13 06:13:21.325513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.712 [2024-07-13 06:13:21.325561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:29.712 [2024-07-13 06:13:21.325570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.391 ms 00:26:29.712 [2024-07-13 06:13:21.325586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.712 [2024-07-13 06:13:21.325665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.712 [2024-07-13 06:13:21.325688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:29.712 [2024-07-13 06:13:21.325697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:26:29.712 [2024-07-13 06:13:21.325706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.712 [2024-07-13 06:13:21.325803] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:29.712 [2024-07-13 06:13:21.325820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:29.712 [2024-07-13 06:13:21.325833] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:29.712 [2024-07-13 06:13:21.325843] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:29.712 [2024-07-13 06:13:21.325852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:29.712 [2024-07-13 06:13:21.325860] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:29.712 [2024-07-13 06:13:21.325868] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:29.712 [2024-07-13 06:13:21.325876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:29.712 [2024-07-13 06:13:21.325885] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:29.712 [2024-07-13 06:13:21.325893] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:29.712 [2024-07-13 06:13:21.325901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:29.712 [2024-07-13 06:13:21.325909] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:29.712 [2024-07-13 06:13:21.325916] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:29.712 [2024-07-13 06:13:21.325924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:29.712 [2024-07-13 06:13:21.325933] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:26:29.712 [2024-07-13 06:13:21.325941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:29.712 [2024-07-13 06:13:21.325949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:29.712 [2024-07-13 06:13:21.325956] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:29.712 [2024-07-13 06:13:21.325966] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:29.712 [2024-07-13 06:13:21.325975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:29.712 [2024-07-13 06:13:21.325983] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:29.712 [2024-07-13 06:13:21.325991] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:29.712 [2024-07-13 06:13:21.325999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:29.712 [2024-07-13 06:13:21.326007] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:29.712 [2024-07-13 06:13:21.326014] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:29.712 [2024-07-13 06:13:21.326022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:29.712 [2024-07-13 06:13:21.326030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:29.712 [2024-07-13 06:13:21.326038] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:29.712 [2024-07-13 06:13:21.326045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:29.712 [2024-07-13 06:13:21.326053] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:29.712 [2024-07-13 06:13:21.326061] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:29.712 [2024-07-13 06:13:21.326070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:29.712 [2024-07-13 06:13:21.326078] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:29.712 [2024-07-13 06:13:21.326086] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:29.712 [2024-07-13 06:13:21.326096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:29.712 [2024-07-13 06:13:21.326104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:29.712 [2024-07-13 06:13:21.326112] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:29.712 [2024-07-13 06:13:21.326120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:29.712 [2024-07-13 06:13:21.326128] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:29.712 [2024-07-13 06:13:21.326136] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:29.712 [2024-07-13 06:13:21.326143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:29.712 [2024-07-13 06:13:21.326151] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:29.712 [2024-07-13 06:13:21.326159] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:29.712 [2024-07-13 06:13:21.326182] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:29.712 [2024-07-13 06:13:21.326192] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:29.712 [2024-07-13 06:13:21.326214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:29.712 [2024-07-13 06:13:21.326227] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:26:29.712 [2024-07-13 06:13:21.326252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:29.712 [2024-07-13 06:13:21.326262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:29.712 [2024-07-13 06:13:21.326270] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:29.712 [2024-07-13 06:13:21.326284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:29.712 [2024-07-13 06:13:21.326294] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:29.712 [2024-07-13 06:13:21.326302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:29.712 [2024-07-13 06:13:21.326313] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:29.712 [2024-07-13 06:13:21.326334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:29.712 [2024-07-13 06:13:21.326345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:29.712 [2024-07-13 06:13:21.326355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:29.712 [2024-07-13 06:13:21.326364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:29.712 [2024-07-13 06:13:21.326373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:29.712 [2024-07-13 06:13:21.326382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:29.712 [2024-07-13 06:13:21.326392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:29.712 [2024-07-13 06:13:21.326401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:29.712 [2024-07-13 06:13:21.326410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:29.712 [2024-07-13 06:13:21.326420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:29.712 [2024-07-13 06:13:21.326429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:29.712 [2024-07-13 06:13:21.326439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:29.712 [2024-07-13 06:13:21.326451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:29.712 [2024-07-13 06:13:21.326461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:29.712 [2024-07-13 06:13:21.326471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:29.712 [2024-07-13 06:13:21.326480] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:26:29.712 [2024-07-13 06:13:21.326493] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:29.712 [2024-07-13 06:13:21.326504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:29.712 [2024-07-13 06:13:21.326514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:29.712 [2024-07-13 06:13:21.326523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:29.712 [2024-07-13 06:13:21.326533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:29.712 [2024-07-13 06:13:21.326543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.712 [2024-07-13 06:13:21.326569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:29.712 [2024-07-13 06:13:21.326593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.792 ms 00:26:29.712 [2024-07-13 06:13:21.326609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.712 [2024-07-13 06:13:21.332875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.712 [2024-07-13 06:13:21.332904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:29.712 [2024-07-13 06:13:21.332918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.208 ms 00:26:29.712 [2024-07-13 06:13:21.332927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.712 [2024-07-13 06:13:21.332980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.713 [2024-07-13 06:13:21.333021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:29.713 [2024-07-13 06:13:21.333032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:29.713 [2024-07-13 06:13:21.333041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.713 [2024-07-13 06:13:21.342219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.713 [2024-07-13 06:13:21.342261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:29.713 [2024-07-13 06:13:21.342279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.086 ms 00:26:29.713 [2024-07-13 06:13:21.342291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.713 [2024-07-13 06:13:21.342367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.713 [2024-07-13 06:13:21.342386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:29.713 [2024-07-13 06:13:21.342399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:29.713 [2024-07-13 06:13:21.342411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.713 [2024-07-13 06:13:21.342558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.713 [2024-07-13 06:13:21.342587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:29.713 [2024-07-13 06:13:21.342601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:26:29.713 [2024-07-13 06:13:21.342632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:29.713 [2024-07-13 06:13:21.342706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.713 [2024-07-13 06:13:21.342723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:29.713 [2024-07-13 06:13:21.342735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:26:29.713 [2024-07-13 06:13:21.342759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.713 [2024-07-13 06:13:21.348778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.713 [2024-07-13 06:13:21.348830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:29.713 [2024-07-13 06:13:21.348843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.987 ms 00:26:29.713 [2024-07-13 06:13:21.348852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.713 [2024-07-13 06:13:21.348950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.713 [2024-07-13 06:13:21.348995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:29.713 [2024-07-13 06:13:21.349039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:29.713 [2024-07-13 06:13:21.349049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.713 [2024-07-13 06:13:21.360422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.713 [2024-07-13 06:13:21.360457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:29.713 [2024-07-13 06:13:21.360474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.349 ms 00:26:29.713 [2024-07-13 06:13:21.360490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.713 [2024-07-13 06:13:21.361740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.713 [2024-07-13 06:13:21.361773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:29.713 [2024-07-13 06:13:21.361786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:26:29.713 [2024-07-13 06:13:21.361795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.713 [2024-07-13 06:13:21.376753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.713 [2024-07-13 06:13:21.376830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:29.713 [2024-07-13 06:13:21.376849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.914 ms 00:26:29.713 [2024-07-13 06:13:21.376860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.713 [2024-07-13 06:13:21.377030] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:29.713 [2024-07-13 06:13:21.377178] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:29.713 [2024-07-13 06:13:21.377281] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:29.713 [2024-07-13 06:13:21.377373] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:29.713 [2024-07-13 06:13:21.377390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.713 [2024-07-13 06:13:21.377402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:29.713 [2024-07-13 
06:13:21.377413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.457 ms 00:26:29.713 [2024-07-13 06:13:21.377436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.713 [2024-07-13 06:13:21.377512] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:29.713 [2024-07-13 06:13:21.377530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.713 [2024-07-13 06:13:21.377541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:29.713 [2024-07-13 06:13:21.377551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:26:29.713 [2024-07-13 06:13:21.377574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.713 [2024-07-13 06:13:21.380340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.713 [2024-07-13 06:13:21.380378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:29.713 [2024-07-13 06:13:21.380393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.711 ms 00:26:29.713 [2024-07-13 06:13:21.380407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.713 [2024-07-13 06:13:21.381169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:29.713 [2024-07-13 06:13:21.381215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:29.713 [2024-07-13 06:13:21.381240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:29.713 [2024-07-13 06:13:21.381251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:29.713 [2024-07-13 06:13:21.381515] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:26:30.282 [2024-07-13 06:13:21.946450] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:26:30.282 [2024-07-13 06:13:21.946646] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:26:30.851 [2024-07-13 06:13:22.501981] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:26:30.851 [2024-07-13 06:13:22.502154] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:30.851 [2024-07-13 06:13:22.502205] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:30.851 [2024-07-13 06:13:22.502249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.851 [2024-07-13 06:13:22.502273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:30.851 [2024-07-13 06:13:22.502287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1120.914 ms 00:26:30.851 [2024-07-13 06:13:22.502315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.851 [2024-07-13 06:13:22.502359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.851 [2024-07-13 06:13:22.502383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:30.851 [2024-07-13 06:13:22.502399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:30.851 [2024-07-13 06:13:22.502415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 
0 00:26:30.851 [2024-07-13 06:13:22.509378] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:30.851 [2024-07-13 06:13:22.509549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.851 [2024-07-13 06:13:22.509565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:30.851 [2024-07-13 06:13:22.509575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.113 ms 00:26:30.851 [2024-07-13 06:13:22.509584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.851 [2024-07-13 06:13:22.510270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.851 [2024-07-13 06:13:22.510294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:26:30.851 [2024-07-13 06:13:22.510306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.559 ms 00:26:30.851 [2024-07-13 06:13:22.510316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.851 [2024-07-13 06:13:22.512365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.851 [2024-07-13 06:13:22.512388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:30.851 [2024-07-13 06:13:22.512399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.011 ms 00:26:30.851 [2024-07-13 06:13:22.512408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.851 [2024-07-13 06:13:22.512454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.851 [2024-07-13 06:13:22.512468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:26:30.851 [2024-07-13 06:13:22.512478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:30.851 [2024-07-13 06:13:22.512496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.851 [2024-07-13 06:13:22.512655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.851 [2024-07-13 06:13:22.512671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:30.851 [2024-07-13 06:13:22.512691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:30.851 [2024-07-13 06:13:22.512701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.851 [2024-07-13 06:13:22.512727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.851 [2024-07-13 06:13:22.512744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:30.851 [2024-07-13 06:13:22.512754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:30.851 [2024-07-13 06:13:22.512763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.852 [2024-07-13 06:13:22.512799] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:30.852 [2024-07-13 06:13:22.512815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.852 [2024-07-13 06:13:22.512840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:30.852 [2024-07-13 06:13:22.512851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:30.852 [2024-07-13 06:13:22.512860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.852 [2024-07-13 06:13:22.512924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.852 [2024-07-13 
06:13:22.512939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:30.852 [2024-07-13 06:13:22.512965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:26:30.852 [2024-07-13 06:13:22.512975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.852 [2024-07-13 06:13:22.514323] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1195.730 ms, result 0 00:26:30.852 [2024-07-13 06:13:22.529910] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.852 [2024-07-13 06:13:22.545908] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:30.852 [2024-07-13 06:13:22.554007] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:31.112 06:13:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:31.112 06:13:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # return 0 00:26:31.112 06:13:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:31.112 06:13:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:31.112 06:13:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:31.112 06:13:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:31.112 06:13:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:31.112 06:13:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:31.112 Validate MD5 checksum, iteration 1 00:26:31.112 06:13:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:31.112 06:13:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:31.112 06:13:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:31.112 06:13:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:31.112 06:13:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:31.112 06:13:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:31.112 06:13:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:31.112 [2024-07-13 06:13:22.680396] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
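The spdk_dd invocation just traced is issued through tcp_dd (ftl/common.sh@198-199), a thin wrapper that runs spdk_dd on the initiator side against the NVMe/TCP attach config in ini.json and the exported ftln1 bdev. A minimal sketch of that wrapper, reconstructed from the xtrace above (the spdk_ini_*/spdk_dd_bin variables are the ones common.sh exports; treat the exact body as an assumption, not the verbatim source):

    # tcp_dd, per the xtrace: verify the initiator config, then pass all dd args through
    tcp_dd() {
        tcp_initiator_setup                          # checks test/ftl/config/ini.json exists
        $spdk_dd_bin "--cpumask=$spdk_ini_cpumask" \
            --rpc-socket="$spdk_ini_rpc" \
            --json="$spdk_ini_cnfg" \
            "$@"    # e.g. --ib=ftln1 --of=.../test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
    }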
00:26:31.112 [2024-07-13 06:13:22.680583] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96354 ] 00:26:31.112 [2024-07-13 06:13:22.832370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.370 [2024-07-13 06:13:22.875716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.056  Copying: 488/1024 [MB] (488 MBps) Copying: 970/1024 [MB] (482 MBps) Copying: 1024/1024 [MB] (average 484 MBps) 00:26:35.056 00:26:35.056 06:13:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:35.056 06:13:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:36.962 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:36.962 Validate MD5 checksum, iteration 2 00:26:36.962 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5ef484417b79ca5f9285141834dd4b89 00:26:36.962 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5ef484417b79ca5f9285141834dd4b89 != \5\e\f\4\8\4\4\1\7\b\7\9\c\a\5\f\9\2\8\5\1\4\1\8\3\4\d\d\4\b\8\9 ]] 00:26:36.962 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:36.962 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:36.962 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:36.962 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:36.962 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:36.962 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:36.962 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:36.963 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:36.963 06:13:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:36.963 [2024-07-13 06:13:28.674638] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
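Each validation pass follows the same pattern: dd 1024 MiB out of ftln1 at a --skip offset that advances by 1024 per iteration, hash the file, and compare against the checksum recorded before the target was shut down. A condensed sketch of the loop traced from upgrade_shutdown.sh@96-105 (the name of the array holding the pre-shutdown checksums is an assumption; the trace only shows the expanded comparison):

    # test_validate_checksum, as suggested by the xtrace above
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        ((skip += 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        [[ $sum != "${md5[$i]}" ]] && return 1   # iteration 1 matched 5ef48441...; iteration 2 is checked the same way below
    done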
00:26:36.963 [2024-07-13 06:13:28.674807] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96423 ] 00:26:37.222 [2024-07-13 06:13:28.824210] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.222 [2024-07-13 06:13:28.866396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.169  Copying: 496/1024 [MB] (496 MBps) Copying: 973/1024 [MB] (477 MBps) Copying: 1024/1024 [MB] (average 486 MBps) 00:26:41.169 00:26:41.169 06:13:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:41.169 06:13:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8328fb0b08f0d3ca49e37b7c88881def 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8328fb0b08f0d3ca49e37b7c88881def != \8\3\2\8\f\b\0\b\0\8\f\0\d\3\c\a\4\9\e\3\7\b\7\c\8\8\8\8\1\d\e\f ]] 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 96337 ]] 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 96337 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@948 -- # '[' -z 96337 ']' 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # kill -0 96337 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # uname 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96337 00:26:43.087 killing process with pid 96337 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96337' 00:26:43.087 06:13:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@967 -- # kill 96337 00:26:43.087 06:13:34 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # wait 96337 00:26:43.087 [2024-07-13 06:13:34.667641] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:43.087 [2024-07-13 06:13:34.672597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.087 [2024-07-13 06:13:34.672642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:43.087 [2024-07-13 06:13:34.672660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:43.087 [2024-07-13 06:13:34.672670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.087 [2024-07-13 06:13:34.672698] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:43.087 [2024-07-13 06:13:34.673182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.087 [2024-07-13 06:13:34.673201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:43.087 [2024-07-13 06:13:34.673214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.466 ms 00:26:43.087 [2024-07-13 06:13:34.673224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.087 [2024-07-13 06:13:34.673484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.087 [2024-07-13 06:13:34.673545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:43.087 [2024-07-13 06:13:34.673556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.235 ms 00:26:43.087 [2024-07-13 06:13:34.673570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.087 [2024-07-13 06:13:34.674672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.087 [2024-07-13 06:13:34.674731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:43.087 [2024-07-13 06:13:34.674755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.080 ms 00:26:43.087 [2024-07-13 06:13:34.674772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.087 [2024-07-13 06:13:34.675914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.087 [2024-07-13 06:13:34.675935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:43.087 [2024-07-13 06:13:34.675946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.084 ms 00:26:43.087 [2024-07-13 06:13:34.675956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.087 [2024-07-13 06:13:34.677507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.087 [2024-07-13 06:13:34.677588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:43.087 [2024-07-13 06:13:34.677818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.469 ms 00:26:43.087 [2024-07-13 06:13:34.677867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.087 [2024-07-13 06:13:34.679067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.087 [2024-07-13 06:13:34.679304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:43.087 [2024-07-13 06:13:34.679412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.127 ms 00:26:43.087 [2024-07-13 06:13:34.679499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.087 [2024-07-13 06:13:34.679653] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.088 [2024-07-13 06:13:34.679855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:43.088 [2024-07-13 06:13:34.679906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:26:43.088 [2024-07-13 06:13:34.679960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.681370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.088 [2024-07-13 06:13:34.681590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:26:43.088 [2024-07-13 06:13:34.681809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.278 ms 00:26:43.088 [2024-07-13 06:13:34.681876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.683373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.088 [2024-07-13 06:13:34.683570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:26:43.088 [2024-07-13 06:13:34.683677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.391 ms 00:26:43.088 [2024-07-13 06:13:34.683743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.685199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.088 [2024-07-13 06:13:34.685340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:43.088 [2024-07-13 06:13:34.685489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.270 ms 00:26:43.088 [2024-07-13 06:13:34.685629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.686829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.088 [2024-07-13 06:13:34.687041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:43.088 [2024-07-13 06:13:34.687176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.086 ms 00:26:43.088 [2024-07-13 06:13:34.687242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.687314] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:43.088 [2024-07-13 06:13:34.687396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:43.088 [2024-07-13 06:13:34.687616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:43.088 [2024-07-13 06:13:34.687648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:43.088 [2024-07-13 06:13:34.687659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:43.088 [2024-07-13 06:13:34.687670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:43.088 [2024-07-13 06:13:34.687680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:43.088 [2024-07-13 06:13:34.687707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:43.088 [2024-07-13 06:13:34.687718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:43.088 [2024-07-13 06:13:34.687728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:43.088 [2024-07-13 06:13:34.687738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:43.088 [2024-07-13 06:13:34.687749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:43.088 [2024-07-13 06:13:34.687759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:43.088 [2024-07-13 06:13:34.687769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:43.088 [2024-07-13 06:13:34.687780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:43.088 [2024-07-13 06:13:34.687790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:43.088 [2024-07-13 06:13:34.687800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:43.088 [2024-07-13 06:13:34.687810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:43.088 [2024-07-13 06:13:34.687821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:43.088 [2024-07-13 06:13:34.687833] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:43.088 [2024-07-13 06:13:34.687843] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: abfae991-9d63-4260-a24f-7fe068285cb3 00:26:43.088 [2024-07-13 06:13:34.687854] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:43.088 [2024-07-13 06:13:34.687869] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:26:43.088 [2024-07-13 06:13:34.687878] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:26:43.088 [2024-07-13 06:13:34.687888] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:26:43.088 [2024-07-13 06:13:34.687898] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:43.088 [2024-07-13 06:13:34.687908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:43.088 [2024-07-13 06:13:34.687927] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:43.088 [2024-07-13 06:13:34.687936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:43.088 [2024-07-13 06:13:34.687946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:43.088 [2024-07-13 06:13:34.687957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.088 [2024-07-13 06:13:34.687967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:43.088 [2024-07-13 06:13:34.687979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.645 ms 00:26:43.088 [2024-07-13 06:13:34.688014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.689518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:43.088 [2024-07-13 06:13:34.689574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:43.088 [2024-07-13 06:13:34.689595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.465 ms 00:26:43.088 [2024-07-13 06:13:34.689605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.689716] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:26:43.088 [2024-07-13 06:13:34.689731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:43.088 [2024-07-13 06:13:34.689742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:26:43.088 [2024-07-13 06:13:34.689751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.695713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.088 [2024-07-13 06:13:34.695755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:43.088 [2024-07-13 06:13:34.695772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.088 [2024-07-13 06:13:34.695784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.695825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.088 [2024-07-13 06:13:34.695841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:43.088 [2024-07-13 06:13:34.695853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.088 [2024-07-13 06:13:34.695865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.695956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.088 [2024-07-13 06:13:34.695978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:43.088 [2024-07-13 06:13:34.695991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.088 [2024-07-13 06:13:34.696001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.696029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.088 [2024-07-13 06:13:34.696043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:43.088 [2024-07-13 06:13:34.696055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.088 [2024-07-13 06:13:34.696066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.703831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.088 [2024-07-13 06:13:34.703880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:43.088 [2024-07-13 06:13:34.703895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.088 [2024-07-13 06:13:34.703905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.710098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.088 [2024-07-13 06:13:34.710202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:43.088 [2024-07-13 06:13:34.710220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.088 [2024-07-13 06:13:34.710241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.710338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.088 [2024-07-13 06:13:34.710375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:43.088 [2024-07-13 06:13:34.710386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.088 [2024-07-13 06:13:34.710396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 
06:13:34.710446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.088 [2024-07-13 06:13:34.710461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:43.088 [2024-07-13 06:13:34.710472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.088 [2024-07-13 06:13:34.710481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.710579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.088 [2024-07-13 06:13:34.710611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:43.088 [2024-07-13 06:13:34.710628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.088 [2024-07-13 06:13:34.710638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.710684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.088 [2024-07-13 06:13:34.710701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:43.088 [2024-07-13 06:13:34.710712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.088 [2024-07-13 06:13:34.710721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.710765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.088 [2024-07-13 06:13:34.710780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:43.088 [2024-07-13 06:13:34.710795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.088 [2024-07-13 06:13:34.710819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.710877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:43.088 [2024-07-13 06:13:34.710901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:43.088 [2024-07-13 06:13:34.710912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:43.088 [2024-07-13 06:13:34.710921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:43.088 [2024-07-13 06:13:34.711055] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 38.421 ms, result 0 00:26:43.347 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:43.347 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:43.347 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:26:43.347 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:26:43.347 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:26:43.347 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:43.347 Remove shared memory files 00:26:43.347 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:26:43.347 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:43.347 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:43.347 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:43.347 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid96149 
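At this point the 'FTL shutdown' management process has finished (38.421 ms, result 0): the EXIT trap ran cleanup, which tears down the TCP target by signalling pid 96337 and waiting for it, then deletes the target config. A rough sketch of that path in ftl/common.sh@130-145, reconstructed from the xtrace (an approximation of the call order shown, not the verbatim source):

    # tcp_target_cleanup -> tcp_target_shutdown, per the xtrace above
    tcp_target_shutdown() {
        if [[ -n $spdk_tgt_pid ]]; then
            killprocess "$spdk_tgt_pid"   # SIGTERM triggers the target's 'FTL shutdown' sequence, then wait
            unset spdk_tgt_pid
        fi
    }

    tcp_target_cleanup() {
        tcp_target_shutdown
        rm -f "$spdk_tgt_cnfg"            # test/ftl/config/tgt.json
    }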
00:26:43.347 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:43.347 06:13:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:43.347 ************************************ 00:26:43.347 END TEST ftl_upgrade_shutdown 00:26:43.347 ************************************ 00:26:43.347 00:26:43.347 real 1m12.582s 00:26:43.347 user 1m39.738s 00:26:43.347 sys 0m21.082s 00:26:43.347 06:13:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:43.347 06:13:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:43.347 06:13:34 ftl -- common/autotest_common.sh@1142 -- # return 0 00:26:43.347 06:13:34 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:26:43.347 06:13:34 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:43.347 06:13:34 ftl -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:26:43.347 06:13:34 ftl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:43.347 06:13:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:43.347 ************************************ 00:26:43.347 START TEST ftl_restore_fast 00:26:43.347 ************************************ 00:26:43.347 06:13:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:43.347 * Looking for test storage... 00:26:43.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
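remove_shm (ftl/common.sh@204-209, traced just above) then clears the shared-memory files left behind; the doubled "rm -f rm -f" in the trace reads like rm -f applied to command-substituted file lists that were mostly empty here, which is an assumption. Only two concrete paths are visible, so the effective commands were:

    # effective remove_shm expansion, per the xtrace above
    echo Remove shared memory files
    rm -f /dev/shm/spdk_tgt_trace.pid96149   # trace file of the target started earlier in this test
    rm -f /dev/shm/iscsi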
00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.glv16ZCQUs 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:26:43.347 06:13:35 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=96561 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 96561 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- common/autotest_common.sh@829 -- # '[' -z 96561 ']' 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:43.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:43.347 06:13:35 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:26:43.605 [2024-07-13 06:13:35.181591] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:26:43.605 [2024-07-13 06:13:35.181779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96561 ] 00:26:43.863 [2024-07-13 06:13:35.332634] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.863 [2024-07-13 06:13:35.377329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.430 06:13:36 ftl.ftl_restore_fast -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:44.430 06:13:36 ftl.ftl_restore_fast -- common/autotest_common.sh@862 -- # return 0 00:26:44.430 06:13:36 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:44.430 06:13:36 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:26:44.430 06:13:36 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:44.430 06:13:36 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:26:44.430 06:13:36 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:26:44.430 06:13:36 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:44.687 06:13:36 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:44.687 06:13:36 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:26:44.687 06:13:36 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:44.687 06:13:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:26:44.687 06:13:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:44.687 06:13:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:44.687 06:13:36 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:26:44.687 06:13:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:44.944 06:13:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:44.944 { 00:26:44.944 "name": "nvme0n1", 00:26:44.944 "aliases": [ 00:26:44.944 "8815807a-8568-4d30-ad00-dbba67409c00" 00:26:44.944 ], 00:26:44.944 "product_name": "NVMe disk", 00:26:44.944 "block_size": 4096, 00:26:44.944 "num_blocks": 1310720, 00:26:44.944 "uuid": "8815807a-8568-4d30-ad00-dbba67409c00", 00:26:44.944 "assigned_rate_limits": { 00:26:44.944 "rw_ios_per_sec": 0, 00:26:44.944 "rw_mbytes_per_sec": 0, 00:26:44.944 "r_mbytes_per_sec": 0, 00:26:44.944 "w_mbytes_per_sec": 0 00:26:44.944 }, 00:26:44.944 "claimed": true, 00:26:44.944 "claim_type": "read_many_write_one", 00:26:44.944 "zoned": false, 00:26:44.944 "supported_io_types": { 00:26:44.944 "read": true, 00:26:44.944 "write": true, 00:26:44.944 "unmap": true, 00:26:44.944 "flush": true, 00:26:44.944 "reset": true, 00:26:44.944 "nvme_admin": true, 00:26:44.944 "nvme_io": true, 00:26:44.944 "nvme_io_md": false, 00:26:44.944 "write_zeroes": true, 00:26:44.944 "zcopy": false, 00:26:44.944 "get_zone_info": false, 00:26:44.944 "zone_management": false, 00:26:44.944 "zone_append": false, 00:26:44.944 "compare": true, 00:26:44.944 "compare_and_write": false, 00:26:44.944 "abort": true, 00:26:44.944 "seek_hole": false, 00:26:44.944 "seek_data": false, 00:26:44.944 "copy": true, 00:26:44.944 "nvme_iov_md": false 00:26:44.944 }, 00:26:44.944 "driver_specific": { 00:26:44.944 "nvme": [ 00:26:44.944 { 00:26:44.944 "pci_address": "0000:00:11.0", 00:26:44.944 "trid": { 00:26:44.944 "trtype": "PCIe", 00:26:44.944 "traddr": "0000:00:11.0" 00:26:44.944 }, 00:26:44.944 "ctrlr_data": { 00:26:44.944 "cntlid": 0, 00:26:44.944 "vendor_id": "0x1b36", 00:26:44.944 "model_number": "QEMU NVMe Ctrl", 00:26:44.944 "serial_number": "12341", 00:26:44.944 "firmware_revision": "8.0.0", 00:26:44.944 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:44.944 "oacs": { 00:26:44.944 "security": 0, 00:26:44.944 "format": 1, 00:26:44.944 "firmware": 0, 00:26:44.944 "ns_manage": 1 00:26:44.944 }, 00:26:44.944 "multi_ctrlr": false, 00:26:44.944 "ana_reporting": false 00:26:44.944 }, 00:26:44.945 "vs": { 00:26:44.945 "nvme_version": "1.4" 00:26:44.945 }, 00:26:44.945 "ns_data": { 00:26:44.945 "id": 1, 00:26:44.945 "can_share": false 00:26:44.945 } 00:26:44.945 } 00:26:44.945 ], 00:26:44.945 "mp_policy": "active_passive" 00:26:44.945 } 00:26:44.945 } 00:26:44.945 ]' 00:26:44.945 06:13:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:44.945 06:13:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:44.945 06:13:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:44.945 06:13:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:26:44.945 06:13:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:26:44.945 06:13:36 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:26:44.945 06:13:36 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:26:44.945 06:13:36 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:44.945 06:13:36 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:26:44.945 06:13:36 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:44.945 
06:13:36 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:45.202 06:13:36 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=45231e4b-3d82-499b-b18b-52a30b4efec1 00:26:45.202 06:13:36 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:26:45.202 06:13:36 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 45231e4b-3d82-499b-b18b-52a30b4efec1 00:26:45.460 06:13:37 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:45.718 06:13:37 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=dd93c6ae-d78a-409e-8412-e1ad8618052c 00:26:45.718 06:13:37 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u dd93c6ae-d78a-409e-8412-e1ad8618052c 00:26:45.976 06:13:37 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=07a9c101-cecf-481f-957a-bf07c46e5fc6 00:26:45.976 06:13:37 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:26:45.976 06:13:37 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 07a9c101-cecf-481f-957a-bf07c46e5fc6 00:26:45.976 06:13:37 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:26:45.976 06:13:37 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:45.976 06:13:37 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=07a9c101-cecf-481f-957a-bf07c46e5fc6 00:26:45.976 06:13:37 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:26:45.976 06:13:37 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 07a9c101-cecf-481f-957a-bf07c46e5fc6 00:26:45.976 06:13:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=07a9c101-cecf-481f-957a-bf07c46e5fc6 00:26:45.976 06:13:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:45.976 06:13:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:45.976 06:13:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:26:45.976 06:13:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 07a9c101-cecf-481f-957a-bf07c46e5fc6 00:26:46.235 06:13:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:46.236 { 00:26:46.236 "name": "07a9c101-cecf-481f-957a-bf07c46e5fc6", 00:26:46.236 "aliases": [ 00:26:46.236 "lvs/nvme0n1p0" 00:26:46.236 ], 00:26:46.236 "product_name": "Logical Volume", 00:26:46.236 "block_size": 4096, 00:26:46.236 "num_blocks": 26476544, 00:26:46.236 "uuid": "07a9c101-cecf-481f-957a-bf07c46e5fc6", 00:26:46.236 "assigned_rate_limits": { 00:26:46.236 "rw_ios_per_sec": 0, 00:26:46.236 "rw_mbytes_per_sec": 0, 00:26:46.236 "r_mbytes_per_sec": 0, 00:26:46.236 "w_mbytes_per_sec": 0 00:26:46.236 }, 00:26:46.236 "claimed": false, 00:26:46.236 "zoned": false, 00:26:46.236 "supported_io_types": { 00:26:46.236 "read": true, 00:26:46.236 "write": true, 00:26:46.236 "unmap": true, 00:26:46.236 "flush": false, 00:26:46.236 "reset": true, 00:26:46.236 "nvme_admin": false, 00:26:46.236 "nvme_io": false, 00:26:46.236 "nvme_io_md": false, 00:26:46.236 "write_zeroes": true, 00:26:46.236 "zcopy": false, 00:26:46.236 "get_zone_info": false, 00:26:46.236 "zone_management": false, 00:26:46.236 "zone_append": false, 00:26:46.236 "compare": 
false, 00:26:46.236 "compare_and_write": false, 00:26:46.236 "abort": false, 00:26:46.236 "seek_hole": true, 00:26:46.236 "seek_data": true, 00:26:46.236 "copy": false, 00:26:46.236 "nvme_iov_md": false 00:26:46.236 }, 00:26:46.236 "driver_specific": { 00:26:46.236 "lvol": { 00:26:46.236 "lvol_store_uuid": "dd93c6ae-d78a-409e-8412-e1ad8618052c", 00:26:46.236 "base_bdev": "nvme0n1", 00:26:46.236 "thin_provision": true, 00:26:46.236 "num_allocated_clusters": 0, 00:26:46.236 "snapshot": false, 00:26:46.236 "clone": false, 00:26:46.236 "esnap_clone": false 00:26:46.236 } 00:26:46.236 } 00:26:46.236 } 00:26:46.236 ]' 00:26:46.236 06:13:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:46.236 06:13:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:46.236 06:13:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:46.236 06:13:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:46.236 06:13:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:46.236 06:13:37 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:26:46.236 06:13:37 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:26:46.236 06:13:37 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:26:46.236 06:13:37 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:46.495 06:13:38 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:46.495 06:13:38 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:46.495 06:13:38 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 07a9c101-cecf-481f-957a-bf07c46e5fc6 00:26:46.495 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=07a9c101-cecf-481f-957a-bf07c46e5fc6 00:26:46.495 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:46.495 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:46.495 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:26:46.495 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 07a9c101-cecf-481f-957a-bf07c46e5fc6 00:26:46.754 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:46.754 { 00:26:46.754 "name": "07a9c101-cecf-481f-957a-bf07c46e5fc6", 00:26:46.754 "aliases": [ 00:26:46.754 "lvs/nvme0n1p0" 00:26:46.754 ], 00:26:46.754 "product_name": "Logical Volume", 00:26:46.754 "block_size": 4096, 00:26:46.754 "num_blocks": 26476544, 00:26:46.754 "uuid": "07a9c101-cecf-481f-957a-bf07c46e5fc6", 00:26:46.754 "assigned_rate_limits": { 00:26:46.754 "rw_ios_per_sec": 0, 00:26:46.754 "rw_mbytes_per_sec": 0, 00:26:46.754 "r_mbytes_per_sec": 0, 00:26:46.754 "w_mbytes_per_sec": 0 00:26:46.754 }, 00:26:46.754 "claimed": false, 00:26:46.754 "zoned": false, 00:26:46.754 "supported_io_types": { 00:26:46.754 "read": true, 00:26:46.754 "write": true, 00:26:46.754 "unmap": true, 00:26:46.754 "flush": false, 00:26:46.754 "reset": true, 00:26:46.754 "nvme_admin": false, 00:26:46.754 "nvme_io": false, 00:26:46.754 "nvme_io_md": false, 00:26:46.754 "write_zeroes": true, 00:26:46.754 "zcopy": false, 00:26:46.754 "get_zone_info": false, 00:26:46.754 "zone_management": false, 00:26:46.754 "zone_append": false, 
00:26:46.754 "compare": false, 00:26:46.754 "compare_and_write": false, 00:26:46.754 "abort": false, 00:26:46.754 "seek_hole": true, 00:26:46.754 "seek_data": true, 00:26:46.754 "copy": false, 00:26:46.754 "nvme_iov_md": false 00:26:46.754 }, 00:26:46.754 "driver_specific": { 00:26:46.754 "lvol": { 00:26:46.754 "lvol_store_uuid": "dd93c6ae-d78a-409e-8412-e1ad8618052c", 00:26:46.754 "base_bdev": "nvme0n1", 00:26:46.754 "thin_provision": true, 00:26:46.754 "num_allocated_clusters": 0, 00:26:46.754 "snapshot": false, 00:26:46.754 "clone": false, 00:26:46.754 "esnap_clone": false 00:26:46.754 } 00:26:46.754 } 00:26:46.754 } 00:26:46.754 ]' 00:26:46.754 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:46.754 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:46.754 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:46.754 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:46.754 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:46.754 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:26:46.755 06:13:38 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:26:46.755 06:13:38 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:47.014 06:13:38 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:26:47.014 06:13:38 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 07a9c101-cecf-481f-957a-bf07c46e5fc6 00:26:47.014 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=07a9c101-cecf-481f-957a-bf07c46e5fc6 00:26:47.014 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:47.014 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:47.014 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:26:47.014 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 07a9c101-cecf-481f-957a-bf07c46e5fc6 00:26:47.273 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:47.273 { 00:26:47.273 "name": "07a9c101-cecf-481f-957a-bf07c46e5fc6", 00:26:47.273 "aliases": [ 00:26:47.273 "lvs/nvme0n1p0" 00:26:47.273 ], 00:26:47.273 "product_name": "Logical Volume", 00:26:47.273 "block_size": 4096, 00:26:47.273 "num_blocks": 26476544, 00:26:47.273 "uuid": "07a9c101-cecf-481f-957a-bf07c46e5fc6", 00:26:47.273 "assigned_rate_limits": { 00:26:47.273 "rw_ios_per_sec": 0, 00:26:47.273 "rw_mbytes_per_sec": 0, 00:26:47.273 "r_mbytes_per_sec": 0, 00:26:47.273 "w_mbytes_per_sec": 0 00:26:47.273 }, 00:26:47.273 "claimed": false, 00:26:47.273 "zoned": false, 00:26:47.273 "supported_io_types": { 00:26:47.273 "read": true, 00:26:47.273 "write": true, 00:26:47.273 "unmap": true, 00:26:47.273 "flush": false, 00:26:47.273 "reset": true, 00:26:47.273 "nvme_admin": false, 00:26:47.273 "nvme_io": false, 00:26:47.273 "nvme_io_md": false, 00:26:47.273 "write_zeroes": true, 00:26:47.273 "zcopy": false, 00:26:47.273 "get_zone_info": false, 00:26:47.273 "zone_management": false, 00:26:47.273 "zone_append": false, 00:26:47.273 "compare": false, 00:26:47.273 "compare_and_write": false, 00:26:47.273 "abort": false, 00:26:47.273 "seek_hole": true, 00:26:47.273 "seek_data": true, 
00:26:47.273 "copy": false, 00:26:47.273 "nvme_iov_md": false 00:26:47.273 }, 00:26:47.273 "driver_specific": { 00:26:47.273 "lvol": { 00:26:47.273 "lvol_store_uuid": "dd93c6ae-d78a-409e-8412-e1ad8618052c", 00:26:47.273 "base_bdev": "nvme0n1", 00:26:47.273 "thin_provision": true, 00:26:47.273 "num_allocated_clusters": 0, 00:26:47.273 "snapshot": false, 00:26:47.273 "clone": false, 00:26:47.273 "esnap_clone": false 00:26:47.273 } 00:26:47.273 } 00:26:47.273 } 00:26:47.273 ]' 00:26:47.273 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:47.273 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:47.273 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:47.273 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:47.273 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:47.273 06:13:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:26:47.273 06:13:38 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:26:47.273 06:13:38 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 07a9c101-cecf-481f-957a-bf07c46e5fc6 --l2p_dram_limit 10' 00:26:47.273 06:13:38 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:26:47.273 06:13:38 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:47.273 06:13:38 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:47.273 06:13:38 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:26:47.273 06:13:38 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:26:47.273 06:13:38 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 07a9c101-cecf-481f-957a-bf07c46e5fc6 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:26:47.533 [2024-07-13 06:13:39.156707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.533 [2024-07-13 06:13:39.156754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:47.533 [2024-07-13 06:13:39.156790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:47.533 [2024-07-13 06:13:39.156800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.533 [2024-07-13 06:13:39.156873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.533 [2024-07-13 06:13:39.156892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:47.533 [2024-07-13 06:13:39.156908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:26:47.533 [2024-07-13 06:13:39.156918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.533 [2024-07-13 06:13:39.156953] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:47.533 [2024-07-13 06:13:39.157311] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:47.533 [2024-07-13 06:13:39.157342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.533 [2024-07-13 06:13:39.157353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:47.533 [2024-07-13 06:13:39.157376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.403 ms 00:26:47.533 [2024-07-13 06:13:39.157393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.533 [2024-07-13 06:13:39.157701] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2bf28cc8-5557-4d51-be79-31ccfa58f7c9 00:26:47.533 [2024-07-13 06:13:39.158841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.533 [2024-07-13 06:13:39.158884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:47.533 [2024-07-13 06:13:39.158900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:47.533 [2024-07-13 06:13:39.158921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.533 [2024-07-13 06:13:39.162990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.533 [2024-07-13 06:13:39.163060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:47.533 [2024-07-13 06:13:39.163075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.019 ms 00:26:47.533 [2024-07-13 06:13:39.163087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.533 [2024-07-13 06:13:39.163212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.533 [2024-07-13 06:13:39.163239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:47.533 [2024-07-13 06:13:39.163251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:26:47.533 [2024-07-13 06:13:39.163263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.533 [2024-07-13 06:13:39.163367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.533 [2024-07-13 06:13:39.163385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:47.533 [2024-07-13 06:13:39.163397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:47.533 [2024-07-13 06:13:39.163409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.533 [2024-07-13 06:13:39.163439] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:47.533 [2024-07-13 06:13:39.164807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.533 [2024-07-13 06:13:39.164854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:47.533 [2024-07-13 06:13:39.164885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.373 ms 00:26:47.533 [2024-07-13 06:13:39.164896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.533 [2024-07-13 06:13:39.164937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.533 [2024-07-13 06:13:39.164952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:47.533 [2024-07-13 06:13:39.164965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:47.533 [2024-07-13 06:13:39.164975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.533 [2024-07-13 06:13:39.165001] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:47.534 [2024-07-13 06:13:39.165194] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:47.534 [2024-07-13 06:13:39.165217] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] 
base layout blob store 0x48 bytes 00:26:47.534 [2024-07-13 06:13:39.165240] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:47.534 [2024-07-13 06:13:39.165259] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:47.534 [2024-07-13 06:13:39.165272] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:47.534 [2024-07-13 06:13:39.165285] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:47.534 [2024-07-13 06:13:39.165298] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:47.534 [2024-07-13 06:13:39.165309] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:47.534 [2024-07-13 06:13:39.165320] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:47.534 [2024-07-13 06:13:39.165333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.534 [2024-07-13 06:13:39.165343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:47.534 [2024-07-13 06:13:39.165356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:26:47.534 [2024-07-13 06:13:39.165366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.534 [2024-07-13 06:13:39.165480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.534 [2024-07-13 06:13:39.165509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:47.534 [2024-07-13 06:13:39.165524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:26:47.534 [2024-07-13 06:13:39.165534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.534 [2024-07-13 06:13:39.165646] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:47.534 [2024-07-13 06:13:39.165662] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:47.534 [2024-07-13 06:13:39.165685] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:47.534 [2024-07-13 06:13:39.165696] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.534 [2024-07-13 06:13:39.165716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:47.534 [2024-07-13 06:13:39.165726] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:47.534 [2024-07-13 06:13:39.165739] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:47.534 [2024-07-13 06:13:39.165750] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:47.534 [2024-07-13 06:13:39.165761] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:47.534 [2024-07-13 06:13:39.165771] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:47.534 [2024-07-13 06:13:39.165782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:47.534 [2024-07-13 06:13:39.165792] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:47.534 [2024-07-13 06:13:39.165803] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:47.534 [2024-07-13 06:13:39.165813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:47.534 [2024-07-13 06:13:39.165826] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:47.534 [2024-07-13 
06:13:39.165836] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.534 [2024-07-13 06:13:39.165847] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:47.534 [2024-07-13 06:13:39.165858] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:47.534 [2024-07-13 06:13:39.165869] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.534 [2024-07-13 06:13:39.165879] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:47.534 [2024-07-13 06:13:39.165890] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:47.534 [2024-07-13 06:13:39.165900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.534 [2024-07-13 06:13:39.165911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:47.534 [2024-07-13 06:13:39.165920] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:47.534 [2024-07-13 06:13:39.165946] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.534 [2024-07-13 06:13:39.165955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:47.534 [2024-07-13 06:13:39.165966] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:47.534 [2024-07-13 06:13:39.165975] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.534 [2024-07-13 06:13:39.165986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:47.534 [2024-07-13 06:13:39.165995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:47.534 [2024-07-13 06:13:39.166007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.534 [2024-07-13 06:13:39.166017] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:47.534 [2024-07-13 06:13:39.166029] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:47.534 [2024-07-13 06:13:39.166039] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:47.534 [2024-07-13 06:13:39.166057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:47.534 [2024-07-13 06:13:39.166067] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:47.534 [2024-07-13 06:13:39.166078] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:47.534 [2024-07-13 06:13:39.166088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:47.534 [2024-07-13 06:13:39.166099] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:47.534 [2024-07-13 06:13:39.166108] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.534 [2024-07-13 06:13:39.166134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:47.534 [2024-07-13 06:13:39.166144] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:47.534 [2024-07-13 06:13:39.166171] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.534 [2024-07-13 06:13:39.166182] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:47.534 [2024-07-13 06:13:39.166194] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:47.534 [2024-07-13 06:13:39.166206] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:47.534 [2024-07-13 06:13:39.166220] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:26:47.534 [2024-07-13 06:13:39.166231] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:47.534 [2024-07-13 06:13:39.166243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:47.534 [2024-07-13 06:13:39.166255] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:47.534 [2024-07-13 06:13:39.166267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:47.534 [2024-07-13 06:13:39.166277] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:47.534 [2024-07-13 06:13:39.166289] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:47.534 [2024-07-13 06:13:39.166313] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:47.534 [2024-07-13 06:13:39.166333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:47.534 [2024-07-13 06:13:39.166345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:47.534 [2024-07-13 06:13:39.166356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:47.534 [2024-07-13 06:13:39.166367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:47.534 [2024-07-13 06:13:39.166378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:47.534 [2024-07-13 06:13:39.166388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:47.534 [2024-07-13 06:13:39.166400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:47.534 [2024-07-13 06:13:39.166410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:47.534 [2024-07-13 06:13:39.166424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:47.534 [2024-07-13 06:13:39.166434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:47.534 [2024-07-13 06:13:39.166446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:47.534 [2024-07-13 06:13:39.166457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:47.534 [2024-07-13 06:13:39.166468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:47.534 [2024-07-13 06:13:39.166478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:47.534 [2024-07-13 06:13:39.166490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:47.534 [2024-07-13 06:13:39.166501] upgrade/ftl_sb_v5.c: 
422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:47.534 [2024-07-13 06:13:39.166514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:47.534 [2024-07-13 06:13:39.166525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:47.534 [2024-07-13 06:13:39.166537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:47.534 [2024-07-13 06:13:39.166547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:47.534 [2024-07-13 06:13:39.166559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:47.534 [2024-07-13 06:13:39.166570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.534 [2024-07-13 06:13:39.166582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:47.534 [2024-07-13 06:13:39.166600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:26:47.534 [2024-07-13 06:13:39.166623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.534 [2024-07-13 06:13:39.166673] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:47.534 [2024-07-13 06:13:39.166692] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:50.150 [2024-07-13 06:13:41.371838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.371927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:50.150 [2024-07-13 06:13:41.371946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2205.179 ms 00:26:50.150 [2024-07-13 06:13:41.371959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 [2024-07-13 06:13:41.378413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.378477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:50.150 [2024-07-13 06:13:41.378493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.376 ms 00:26:50.150 [2024-07-13 06:13:41.378506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 [2024-07-13 06:13:41.378602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.378623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:50.150 [2024-07-13 06:13:41.378634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:50.150 [2024-07-13 06:13:41.378645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 [2024-07-13 06:13:41.385679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.385749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:50.150 [2024-07-13 06:13:41.385778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.964 ms 00:26:50.150 [2024-07-13 06:13:41.385790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 
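For scale, the scrub figures above can be cross-checked against the layout dump: the NV cache data region (type 0xfffffffe, blk_sz 0x13c0e0) is 1294560 blocks x 4 KiB ≈ 4.94 GiB, split into the 5 chunks being scrubbed (≈1011 MiB each), so the reported 2205.179 ms works out to an effective scrub rate of roughly 2.2 GiB/s. This is back-of-the-envelope arithmetic from the logged values, not a figure reported by the tool itself.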
[2024-07-13 06:13:41.385828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.385844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:50.150 [2024-07-13 06:13:41.385855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:50.150 [2024-07-13 06:13:41.385866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 [2024-07-13 06:13:41.386232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.386253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:50.150 [2024-07-13 06:13:41.386266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:26:50.150 [2024-07-13 06:13:41.386277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 [2024-07-13 06:13:41.386431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.386453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:50.150 [2024-07-13 06:13:41.386464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:26:50.150 [2024-07-13 06:13:41.386476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 [2024-07-13 06:13:41.391564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.391600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:50.150 [2024-07-13 06:13:41.391630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.065 ms 00:26:50.150 [2024-07-13 06:13:41.391642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 [2024-07-13 06:13:41.398956] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:50.150 [2024-07-13 06:13:41.401375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.401419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:50.150 [2024-07-13 06:13:41.401450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.641 ms 00:26:50.150 [2024-07-13 06:13:41.401461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 [2024-07-13 06:13:41.455873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.455939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:50.150 [2024-07-13 06:13:41.455986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.375 ms 00:26:50.150 [2024-07-13 06:13:41.455996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 [2024-07-13 06:13:41.456204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.456257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:50.150 [2024-07-13 06:13:41.456273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:26:50.150 [2024-07-13 06:13:41.456284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 [2024-07-13 06:13:41.459795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.459830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:50.150 [2024-07-13 06:13:41.459862] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.466 ms 00:26:50.150 [2024-07-13 06:13:41.459886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 [2024-07-13 06:13:41.462684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.462719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:50.150 [2024-07-13 06:13:41.462751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.746 ms 00:26:50.150 [2024-07-13 06:13:41.462761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 [2024-07-13 06:13:41.463126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.463165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:50.150 [2024-07-13 06:13:41.463180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:26:50.150 [2024-07-13 06:13:41.463191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 [2024-07-13 06:13:41.494783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.494838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:50.150 [2024-07-13 06:13:41.494872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.558 ms 00:26:50.150 [2024-07-13 06:13:41.494886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 [2024-07-13 06:13:41.498728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.498763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:50.150 [2024-07-13 06:13:41.498796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.794 ms 00:26:50.150 [2024-07-13 06:13:41.498806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 [2024-07-13 06:13:41.502095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.502167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:50.150 [2024-07-13 06:13:41.502186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.246 ms 00:26:50.150 [2024-07-13 06:13:41.502196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 [2024-07-13 06:13:41.505722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.505772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:50.150 [2024-07-13 06:13:41.505804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.483 ms 00:26:50.150 [2024-07-13 06:13:41.505815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 [2024-07-13 06:13:41.505866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.505883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:50.150 [2024-07-13 06:13:41.505896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:50.150 [2024-07-13 06:13:41.505906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.150 [2024-07-13 06:13:41.506008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.150 [2024-07-13 06:13:41.506040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:26:50.150 [2024-07-13 06:13:41.506053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:50.151 [2024-07-13 06:13:41.506066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.151 [2024-07-13 06:13:41.507399] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2350.098 ms, result 0 00:26:50.151 { 00:26:50.151 "name": "ftl0", 00:26:50.151 "uuid": "2bf28cc8-5557-4d51-be79-31ccfa58f7c9" 00:26:50.151 } 00:26:50.151 06:13:41 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:26:50.151 06:13:41 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:50.151 06:13:41 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:26:50.151 06:13:41 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:50.411 [2024-07-13 06:13:41.938373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.411 [2024-07-13 06:13:41.938450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:50.411 [2024-07-13 06:13:41.938470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:50.411 [2024-07-13 06:13:41.938483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.411 [2024-07-13 06:13:41.938527] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:50.411 [2024-07-13 06:13:41.938950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.411 [2024-07-13 06:13:41.938967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:50.411 [2024-07-13 06:13:41.938982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:26:50.411 [2024-07-13 06:13:41.938993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.411 [2024-07-13 06:13:41.939287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.411 [2024-07-13 06:13:41.939305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:50.411 [2024-07-13 06:13:41.939320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:26:50.411 [2024-07-13 06:13:41.939332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.411 [2024-07-13 06:13:41.942071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.411 [2024-07-13 06:13:41.942113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:50.411 [2024-07-13 06:13:41.942143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.715 ms 00:26:50.411 [2024-07-13 06:13:41.942165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.411 [2024-07-13 06:13:41.947287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.411 [2024-07-13 06:13:41.947314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:50.411 [2024-07-13 06:13:41.947343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.097 ms 00:26:50.411 [2024-07-13 06:13:41.947353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.411 [2024-07-13 06:13:41.948694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.411 [2024-07-13 06:13:41.948758] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:50.411 [2024-07-13 06:13:41.948776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:26:50.411 [2024-07-13 06:13:41.948786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.411 [2024-07-13 06:13:41.952750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.411 [2024-07-13 06:13:41.952787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:50.411 [2024-07-13 06:13:41.952819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.910 ms 00:26:50.411 [2024-07-13 06:13:41.952829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.411 [2024-07-13 06:13:41.952954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.411 [2024-07-13 06:13:41.952971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:50.411 [2024-07-13 06:13:41.952988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:26:50.411 [2024-07-13 06:13:41.953014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.411 [2024-07-13 06:13:41.954858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.411 [2024-07-13 06:13:41.954921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:50.411 [2024-07-13 06:13:41.954951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.803 ms 00:26:50.411 [2024-07-13 06:13:41.954960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.412 [2024-07-13 06:13:41.956420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.412 [2024-07-13 06:13:41.956467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:50.412 [2024-07-13 06:13:41.956499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.420 ms 00:26:50.412 [2024-07-13 06:13:41.956522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.412 [2024-07-13 06:13:41.957653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.412 [2024-07-13 06:13:41.957700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:50.412 [2024-07-13 06:13:41.957732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.092 ms 00:26:50.412 [2024-07-13 06:13:41.957742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.412 [2024-07-13 06:13:41.959014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.412 [2024-07-13 06:13:41.959068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:50.412 [2024-07-13 06:13:41.959085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.205 ms 00:26:50.412 [2024-07-13 06:13:41.959095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.412 [2024-07-13 06:13:41.959168] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:50.412 [2024-07-13 06:13:41.959191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 
00:26:50.412 [2024-07-13 06:13:41.959242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 
0 state: free 00:26:50.412 [2024-07-13 06:13:41.959605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
53: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.959994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960231] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:50.412 [2024-07-13 06:13:41.960288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:50.413 [2024-07-13 06:13:41.960302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:50.413 [2024-07-13 06:13:41.960313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:50.413 [2024-07-13 06:13:41.960327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:50.413 [2024-07-13 06:13:41.960338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:50.413 [2024-07-13 06:13:41.960350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:50.413 [2024-07-13 06:13:41.960361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:50.413 [2024-07-13 06:13:41.960373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:50.413 [2024-07-13 06:13:41.960384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:50.413 [2024-07-13 06:13:41.960396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:50.413 [2024-07-13 06:13:41.960407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:50.413 [2024-07-13 06:13:41.960420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:50.413 [2024-07-13 06:13:41.960430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:50.413 [2024-07-13 06:13:41.960443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:50.413 [2024-07-13 06:13:41.960454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:50.413 [2024-07-13 06:13:41.960466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:50.413 [2024-07-13 06:13:41.960477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:50.413 [2024-07-13 06:13:41.960490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:50.413 [2024-07-13 06:13:41.960509] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:50.413 [2024-07-13 06:13:41.960524] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2bf28cc8-5557-4d51-be79-31ccfa58f7c9 00:26:50.413 [2024-07-13 06:13:41.960535] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] total valid LBAs: 0 00:26:50.413 [2024-07-13 06:13:41.960546] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:50.413 [2024-07-13 06:13:41.960557] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:50.413 [2024-07-13 06:13:41.960569] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:50.413 [2024-07-13 06:13:41.960579] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:50.413 [2024-07-13 06:13:41.960595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:50.413 [2024-07-13 06:13:41.960620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:50.413 [2024-07-13 06:13:41.960631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:50.413 [2024-07-13 06:13:41.960640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:50.413 [2024-07-13 06:13:41.960652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.413 [2024-07-13 06:13:41.960670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:50.413 [2024-07-13 06:13:41.960683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.502 ms 00:26:50.413 [2024-07-13 06:13:41.960694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.413 [2024-07-13 06:13:41.962051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.413 [2024-07-13 06:13:41.962080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:50.413 [2024-07-13 06:13:41.962098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.329 ms 00:26:50.413 [2024-07-13 06:13:41.962110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.413 [2024-07-13 06:13:41.962252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.413 [2024-07-13 06:13:41.962270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:50.413 [2024-07-13 06:13:41.962283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:26:50.413 [2024-07-13 06:13:41.962294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.413 [2024-07-13 06:13:41.967225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.413 [2024-07-13 06:13:41.967254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:50.413 [2024-07-13 06:13:41.967270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.413 [2024-07-13 06:13:41.967291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.413 [2024-07-13 06:13:41.967350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.413 [2024-07-13 06:13:41.967364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:50.413 [2024-07-13 06:13:41.967387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.413 [2024-07-13 06:13:41.967403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.413 [2024-07-13 06:13:41.967492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.413 [2024-07-13 06:13:41.967509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:50.413 [2024-07-13 06:13:41.967524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.413 [2024-07-13 
06:13:41.967540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.413 [2024-07-13 06:13:41.967587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.413 [2024-07-13 06:13:41.967601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:50.413 [2024-07-13 06:13:41.967613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.413 [2024-07-13 06:13:41.967623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.413 [2024-07-13 06:13:41.975378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.413 [2024-07-13 06:13:41.975441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:50.413 [2024-07-13 06:13:41.975491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.413 [2024-07-13 06:13:41.975505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.413 [2024-07-13 06:13:41.981942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.413 [2024-07-13 06:13:41.981994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:50.413 [2024-07-13 06:13:41.982029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.413 [2024-07-13 06:13:41.982040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.413 [2024-07-13 06:13:41.982116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.413 [2024-07-13 06:13:41.982134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:50.413 [2024-07-13 06:13:41.982200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.413 [2024-07-13 06:13:41.982215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.413 [2024-07-13 06:13:41.982295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.413 [2024-07-13 06:13:41.982311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:50.413 [2024-07-13 06:13:41.982325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.413 [2024-07-13 06:13:41.982336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.413 [2024-07-13 06:13:41.982500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.413 [2024-07-13 06:13:41.982517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:50.413 [2024-07-13 06:13:41.982531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.413 [2024-07-13 06:13:41.982556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.413 [2024-07-13 06:13:41.982612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.413 [2024-07-13 06:13:41.982637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:50.413 [2024-07-13 06:13:41.982651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.413 [2024-07-13 06:13:41.982665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.413 [2024-07-13 06:13:41.982712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.413 [2024-07-13 06:13:41.982727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:50.413 [2024-07-13 06:13:41.982743] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.413 [2024-07-13 06:13:41.982754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.413 [2024-07-13 06:13:41.982810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.413 [2024-07-13 06:13:41.982862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:50.413 [2024-07-13 06:13:41.982877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.413 [2024-07-13 06:13:41.982888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.413 [2024-07-13 06:13:41.983045] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 44.624 ms, result 0 00:26:50.413 true 00:26:50.413 06:13:42 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 96561 00:26:50.413 06:13:42 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 96561 ']' 00:26:50.413 06:13:42 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 96561 00:26:50.413 06:13:42 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # uname 00:26:50.413 06:13:42 ftl.ftl_restore_fast -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:50.413 06:13:42 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96561 00:26:50.413 06:13:42 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:50.413 06:13:42 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:50.413 killing process with pid 96561 00:26:50.413 06:13:42 ftl.ftl_restore_fast -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96561' 00:26:50.413 06:13:42 ftl.ftl_restore_fast -- common/autotest_common.sh@967 -- # kill 96561 00:26:50.413 06:13:42 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # wait 96561 00:26:53.724 06:13:44 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:57.014 262144+0 records in 00:26:57.014 262144+0 records out 00:26:57.014 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.72218 s, 288 MB/s 00:26:57.014 06:13:48 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:58.918 06:13:50 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:58.918 [2024-07-13 06:13:50.553872] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
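The write phase starting here is driven by three commands that appear verbatim in the trace; condensed, and with the 1 GiB size made explicit (256K blocks x 4 KiB = 262144 x 4096 = 1073741824 bytes, matching the dd summary above), the sequence is:

    # generate and fingerprint 1 GiB of random data (paths as logged)
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
    # replay it onto the FTL bdev using the subsystem config saved before the unload
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The md5sum output itself is not shown in this slice of the log; presumably the checksum is what the restore test compares against after the device is brought back up.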
00:26:58.918 [2024-07-13 06:13:50.554069] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96757 ] 00:26:59.187 [2024-07-13 06:13:50.704187] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.187 [2024-07-13 06:13:50.753038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.187 [2024-07-13 06:13:50.846005] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:59.187 [2024-07-13 06:13:50.846088] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:59.450 [2024-07-13 06:13:51.000661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.450 [2024-07-13 06:13:51.000706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:59.450 [2024-07-13 06:13:51.000724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:59.450 [2024-07-13 06:13:51.000743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.450 [2024-07-13 06:13:51.000803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.450 [2024-07-13 06:13:51.000821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:59.450 [2024-07-13 06:13:51.000835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:59.450 [2024-07-13 06:13:51.000851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.450 [2024-07-13 06:13:51.000879] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:59.450 [2024-07-13 06:13:51.001141] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:59.450 [2024-07-13 06:13:51.001205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.450 [2024-07-13 06:13:51.001216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:59.450 [2024-07-13 06:13:51.001227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:26:59.450 [2024-07-13 06:13:51.001240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.450 [2024-07-13 06:13:51.002384] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:59.450 [2024-07-13 06:13:51.004398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.450 [2024-07-13 06:13:51.004434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:59.450 [2024-07-13 06:13:51.004471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.015 ms 00:26:59.450 [2024-07-13 06:13:51.004489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.450 [2024-07-13 06:13:51.004549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.450 [2024-07-13 06:13:51.004566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:59.450 [2024-07-13 06:13:51.004589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:26:59.450 [2024-07-13 06:13:51.004598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.450 [2024-07-13 06:13:51.008787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:59.450 [2024-07-13 06:13:51.008825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:59.450 [2024-07-13 06:13:51.008855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.117 ms 00:26:59.450 [2024-07-13 06:13:51.008865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.450 [2024-07-13 06:13:51.008955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.450 [2024-07-13 06:13:51.008973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:59.450 [2024-07-13 06:13:51.008983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:59.450 [2024-07-13 06:13:51.008993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.450 [2024-07-13 06:13:51.009049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.450 [2024-07-13 06:13:51.009065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:59.450 [2024-07-13 06:13:51.009081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:59.450 [2024-07-13 06:13:51.009139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.450 [2024-07-13 06:13:51.009213] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:59.450 [2024-07-13 06:13:51.010649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.450 [2024-07-13 06:13:51.010680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:59.450 [2024-07-13 06:13:51.010724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.446 ms 00:26:59.450 [2024-07-13 06:13:51.010743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.450 [2024-07-13 06:13:51.010783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.450 [2024-07-13 06:13:51.010800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:59.450 [2024-07-13 06:13:51.010810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:59.450 [2024-07-13 06:13:51.010819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.450 [2024-07-13 06:13:51.010858] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:59.450 [2024-07-13 06:13:51.010885] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:59.450 [2024-07-13 06:13:51.010927] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:59.450 [2024-07-13 06:13:51.010947] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:59.450 [2024-07-13 06:13:51.011032] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:59.450 [2024-07-13 06:13:51.011045] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:59.450 [2024-07-13 06:13:51.011057] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:59.450 [2024-07-13 06:13:51.011069] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:59.450 [2024-07-13 06:13:51.011079] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:59.450 [2024-07-13 06:13:51.011096] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:59.450 [2024-07-13 06:13:51.011105] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:59.450 [2024-07-13 06:13:51.011113] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:59.450 [2024-07-13 06:13:51.011121] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:59.450 [2024-07-13 06:13:51.011131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.450 [2024-07-13 06:13:51.011155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:59.450 [2024-07-13 06:13:51.011385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:26:59.450 [2024-07-13 06:13:51.011443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.450 [2024-07-13 06:13:51.011566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.450 [2024-07-13 06:13:51.011639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:59.450 [2024-07-13 06:13:51.011699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:59.450 [2024-07-13 06:13:51.011737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.450 [2024-07-13 06:13:51.011907] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:59.450 [2024-07-13 06:13:51.011961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:59.450 [2024-07-13 06:13:51.011999] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:59.450 [2024-07-13 06:13:51.012115] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:59.450 [2024-07-13 06:13:51.012215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:59.450 [2024-07-13 06:13:51.012255] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:59.450 [2024-07-13 06:13:51.012329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:59.450 [2024-07-13 06:13:51.012345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:59.450 [2024-07-13 06:13:51.012356] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:59.450 [2024-07-13 06:13:51.012366] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:59.450 [2024-07-13 06:13:51.012375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:59.450 [2024-07-13 06:13:51.012385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:59.451 [2024-07-13 06:13:51.012394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:59.451 [2024-07-13 06:13:51.012404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:59.451 [2024-07-13 06:13:51.012419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:59.451 [2024-07-13 06:13:51.012429] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:59.451 [2024-07-13 06:13:51.012439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:59.451 [2024-07-13 06:13:51.012448] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:59.451 [2024-07-13 06:13:51.012457] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:59.451 [2024-07-13 06:13:51.012467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:59.451 [2024-07-13 06:13:51.012476] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:59.451 [2024-07-13 06:13:51.012485] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:59.451 [2024-07-13 06:13:51.012495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:59.451 [2024-07-13 06:13:51.012504] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:59.451 [2024-07-13 06:13:51.012514] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:59.451 [2024-07-13 06:13:51.012524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:59.451 [2024-07-13 06:13:51.012533] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:59.451 [2024-07-13 06:13:51.012542] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:59.451 [2024-07-13 06:13:51.012567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:59.451 [2024-07-13 06:13:51.012576] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:59.451 [2024-07-13 06:13:51.012589] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:59.451 [2024-07-13 06:13:51.012599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:59.451 [2024-07-13 06:13:51.012608] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:59.451 [2024-07-13 06:13:51.012617] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:59.451 [2024-07-13 06:13:51.012626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:59.451 [2024-07-13 06:13:51.012634] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:59.451 [2024-07-13 06:13:51.012643] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:59.451 [2024-07-13 06:13:51.012652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:59.451 [2024-07-13 06:13:51.012675] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:59.451 [2024-07-13 06:13:51.012684] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:59.451 [2024-07-13 06:13:51.012693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:59.451 [2024-07-13 06:13:51.012701] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:59.451 [2024-07-13 06:13:51.012712] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:59.451 [2024-07-13 06:13:51.012721] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:59.451 [2024-07-13 06:13:51.012740] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:59.451 [2024-07-13 06:13:51.012750] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:59.451 [2024-07-13 06:13:51.012762] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:59.451 [2024-07-13 06:13:51.012773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:59.451 [2024-07-13 06:13:51.012783] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:59.451 [2024-07-13 06:13:51.012792] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:59.451 
[2024-07-13 06:13:51.012801] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:59.451 [2024-07-13 06:13:51.012809] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:59.451 [2024-07-13 06:13:51.012818] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:59.451 [2024-07-13 06:13:51.012829] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:59.451 [2024-07-13 06:13:51.012855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:59.451 [2024-07-13 06:13:51.012866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:59.451 [2024-07-13 06:13:51.012876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:59.451 [2024-07-13 06:13:51.012886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:59.451 [2024-07-13 06:13:51.012896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:59.451 [2024-07-13 06:13:51.012905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:59.451 [2024-07-13 06:13:51.012915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:59.451 [2024-07-13 06:13:51.012924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:59.451 [2024-07-13 06:13:51.012935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:59.451 [2024-07-13 06:13:51.012945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:59.451 [2024-07-13 06:13:51.012955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:59.451 [2024-07-13 06:13:51.012964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:59.451 [2024-07-13 06:13:51.012973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:59.451 [2024-07-13 06:13:51.012983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:59.451 [2024-07-13 06:13:51.012992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:59.451 [2024-07-13 06:13:51.013001] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:59.451 [2024-07-13 06:13:51.013011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:59.451 [2024-07-13 06:13:51.013032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:59.451 [2024-07-13 06:13:51.013042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:59.451 [2024-07-13 06:13:51.013067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:59.451 [2024-07-13 06:13:51.013076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:59.451 [2024-07-13 06:13:51.013098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.451 [2024-07-13 06:13:51.013126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:59.451 [2024-07-13 06:13:51.013142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.266 ms 00:26:59.451 [2024-07-13 06:13:51.013169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.451 [2024-07-13 06:13:51.029891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.451 [2024-07-13 06:13:51.030090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:59.451 [2024-07-13 06:13:51.030250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.641 ms 00:26:59.451 [2024-07-13 06:13:51.030401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.451 [2024-07-13 06:13:51.030571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.451 [2024-07-13 06:13:51.030634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:59.451 [2024-07-13 06:13:51.030741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:26:59.451 [2024-07-13 06:13:51.030858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.451 [2024-07-13 06:13:51.038038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.451 [2024-07-13 06:13:51.038232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:59.451 [2024-07-13 06:13:51.038361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.055 ms 00:26:59.451 [2024-07-13 06:13:51.038530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.451 [2024-07-13 06:13:51.038619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.451 [2024-07-13 06:13:51.038695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:59.451 [2024-07-13 06:13:51.038838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:59.451 [2024-07-13 06:13:51.038887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.451 [2024-07-13 06:13:51.039324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.451 [2024-07-13 06:13:51.039495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:59.451 [2024-07-13 06:13:51.039598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:26:59.451 [2024-07-13 06:13:51.039695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.451 [2024-07-13 06:13:51.039878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.451 [2024-07-13 06:13:51.039937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:59.451 [2024-07-13 06:13:51.040045] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:26:59.451 [2024-07-13 06:13:51.040101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.451 [2024-07-13 06:13:51.044460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.451 [2024-07-13 06:13:51.044640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:59.451 [2024-07-13 06:13:51.044749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.286 ms 00:26:59.451 [2024-07-13 06:13:51.044795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.451 [2024-07-13 06:13:51.047201] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:59.451 [2024-07-13 06:13:51.047405] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:59.451 [2024-07-13 06:13:51.047545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.451 [2024-07-13 06:13:51.047586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:59.451 [2024-07-13 06:13:51.047675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.496 ms 00:26:59.451 [2024-07-13 06:13:51.047777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.451 [2024-07-13 06:13:51.060447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.451 [2024-07-13 06:13:51.060611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:59.451 [2024-07-13 06:13:51.060720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.592 ms 00:26:59.451 [2024-07-13 06:13:51.060777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.451 [2024-07-13 06:13:51.062480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.451 [2024-07-13 06:13:51.062641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:59.451 [2024-07-13 06:13:51.062748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.633 ms 00:26:59.451 [2024-07-13 06:13:51.062768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.452 [2024-07-13 06:13:51.064501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.452 [2024-07-13 06:13:51.064537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:59.452 [2024-07-13 06:13:51.064567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.690 ms 00:26:59.452 [2024-07-13 06:13:51.064576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.452 [2024-07-13 06:13:51.064892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.452 [2024-07-13 06:13:51.064911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:59.452 [2024-07-13 06:13:51.064922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:26:59.452 [2024-07-13 06:13:51.064932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.452 [2024-07-13 06:13:51.079909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.452 [2024-07-13 06:13:51.079976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:59.452 [2024-07-13 06:13:51.080011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
14.953 ms 00:26:59.452 [2024-07-13 06:13:51.080021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.452 [2024-07-13 06:13:51.086817] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:59.452 [2024-07-13 06:13:51.088805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.452 [2024-07-13 06:13:51.088837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:59.452 [2024-07-13 06:13:51.088851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.722 ms 00:26:59.452 [2024-07-13 06:13:51.088860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.452 [2024-07-13 06:13:51.088918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.452 [2024-07-13 06:13:51.088942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:59.452 [2024-07-13 06:13:51.088953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:59.452 [2024-07-13 06:13:51.088963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.452 [2024-07-13 06:13:51.089054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.452 [2024-07-13 06:13:51.089071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:59.452 [2024-07-13 06:13:51.089119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:59.452 [2024-07-13 06:13:51.089144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.452 [2024-07-13 06:13:51.089208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.452 [2024-07-13 06:13:51.089226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:59.452 [2024-07-13 06:13:51.089248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:59.452 [2024-07-13 06:13:51.089258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.452 [2024-07-13 06:13:51.089297] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:59.452 [2024-07-13 06:13:51.089313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.452 [2024-07-13 06:13:51.089339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:59.452 [2024-07-13 06:13:51.089349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:59.452 [2024-07-13 06:13:51.089362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.452 [2024-07-13 06:13:51.092630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.452 [2024-07-13 06:13:51.092666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:59.452 [2024-07-13 06:13:51.092681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.246 ms 00:26:59.452 [2024-07-13 06:13:51.092691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.452 [2024-07-13 06:13:51.092755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:59.452 [2024-07-13 06:13:51.092772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:59.452 [2024-07-13 06:13:51.092782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:59.452 [2024-07-13 06:13:51.092797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:59.452 
[2024-07-13 06:13:51.094066] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 92.833 ms, result 0 00:27:42.222  Copying: 23/1024 [MB] (23 MBps) Copying: 47/1024 [MB] (23 MBps) Copying: 71/1024 [MB] (23 MBps) Copying: 94/1024 [MB] (23 MBps) Copying: 118/1024 [MB] (24 MBps) Copying: 143/1024 [MB] (24 MBps) Copying: 167/1024 [MB] (24 MBps) Copying: 191/1024 [MB] (24 MBps) Copying: 215/1024 [MB] (24 MBps) Copying: 240/1024 [MB] (24 MBps) Copying: 264/1024 [MB] (24 MBps) Copying: 288/1024 [MB] (24 MBps) Copying: 312/1024 [MB] (24 MBps) Copying: 337/1024 [MB] (24 MBps) Copying: 361/1024 [MB] (24 MBps) Copying: 386/1024 [MB] (24 MBps) Copying: 410/1024 [MB] (24 MBps) Copying: 434/1024 [MB] (23 MBps) Copying: 458/1024 [MB] (23 MBps) Copying: 482/1024 [MB] (23 MBps) Copying: 506/1024 [MB] (23 MBps) Copying: 530/1024 [MB] (23 MBps) Copying: 554/1024 [MB] (24 MBps) Copying: 578/1024 [MB] (24 MBps) Copying: 602/1024 [MB] (24 MBps) Copying: 626/1024 [MB] (24 MBps) Copying: 650/1024 [MB] (23 MBps) Copying: 674/1024 [MB] (23 MBps) Copying: 698/1024 [MB] (23 MBps) Copying: 722/1024 [MB] (23 MBps) Copying: 746/1024 [MB] (24 MBps) Copying: 770/1024 [MB] (24 MBps) Copying: 794/1024 [MB] (24 MBps) Copying: 817/1024 [MB] (23 MBps) Copying: 841/1024 [MB] (23 MBps) Copying: 865/1024 [MB] (23 MBps) Copying: 888/1024 [MB] (23 MBps) Copying: 912/1024 [MB] (23 MBps) Copying: 936/1024 [MB] (23 MBps) Copying: 960/1024 [MB] (23 MBps) Copying: 984/1024 [MB] (23 MBps) Copying: 1008/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-13 06:14:33.787661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.222 [2024-07-13 06:14:33.787725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:42.222 [2024-07-13 06:14:33.787767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:42.222 [2024-07-13 06:14:33.787777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.222 [2024-07-13 06:14:33.787810] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:42.222 [2024-07-13 06:14:33.788281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.222 [2024-07-13 06:14:33.788304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:42.222 [2024-07-13 06:14:33.788315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:27:42.222 [2024-07-13 06:14:33.788325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.222 [2024-07-13 06:14:33.789995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.222 [2024-07-13 06:14:33.790063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:42.222 [2024-07-13 06:14:33.790092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.626 ms 00:27:42.222 [2024-07-13 06:14:33.790108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.222 [2024-07-13 06:14:33.790137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.222 [2024-07-13 06:14:33.790162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:42.222 [2024-07-13 06:14:33.790175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:42.222 [2024-07-13 06:14:33.790184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:42.222 [2024-07-13 06:14:33.790232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.222 [2024-07-13 06:14:33.790245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:42.222 [2024-07-13 06:14:33.790254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:42.222 [2024-07-13 06:14:33.790263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.222 [2024-07-13 06:14:33.790281] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:42.222 [2024-07-13 06:14:33.790311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 
state: free 00:27:42.222 [2024-07-13 06:14:33.790511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:42.222 [2024-07-13 06:14:33.790637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 
0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.790995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.791004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.791013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.791023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.791032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.791041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.791050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.791071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.791081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.791090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.791099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.791107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.791116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.791125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.791494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.791553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.791689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.791831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.791941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.792032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.792148] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.792290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.792346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.792395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.792443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:42.223 [2024-07-13 06:14:33.792569] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:42.223 [2024-07-13 06:14:33.792609] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2bf28cc8-5557-4d51-be79-31ccfa58f7c9 00:27:42.223 [2024-07-13 06:14:33.792636] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:42.223 [2024-07-13 06:14:33.792646] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:27:42.223 [2024-07-13 06:14:33.792655] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:42.223 [2024-07-13 06:14:33.792664] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:42.223 [2024-07-13 06:14:33.792673] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:42.223 [2024-07-13 06:14:33.792682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:42.223 [2024-07-13 06:14:33.792691] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:42.223 [2024-07-13 06:14:33.792700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:42.223 [2024-07-13 06:14:33.792709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:42.223 [2024-07-13 06:14:33.792719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.223 [2024-07-13 06:14:33.792734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:42.223 [2024-07-13 06:14:33.792744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.438 ms 00:27:42.223 [2024-07-13 06:14:33.792754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.223 [2024-07-13 06:14:33.794030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.223 [2024-07-13 06:14:33.794056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:42.223 [2024-07-13 06:14:33.794067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.244 ms 00:27:42.223 [2024-07-13 06:14:33.794076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.223 [2024-07-13 06:14:33.794151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:42.223 [2024-07-13 06:14:33.794165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:42.223 [2024-07-13 06:14:33.794215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:42.223 [2024-07-13 06:14:33.794228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.223 [2024-07-13 06:14:33.798162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.223 [2024-07-13 06:14:33.798199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:42.223 [2024-07-13 06:14:33.798213] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.223 [2024-07-13 06:14:33.798223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.223 [2024-07-13 06:14:33.798275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.223 [2024-07-13 06:14:33.798298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:42.223 [2024-07-13 06:14:33.798308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.223 [2024-07-13 06:14:33.798316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.223 [2024-07-13 06:14:33.798409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.223 [2024-07-13 06:14:33.798428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:42.223 [2024-07-13 06:14:33.798439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.223 [2024-07-13 06:14:33.798448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.223 [2024-07-13 06:14:33.798468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.223 [2024-07-13 06:14:33.798484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:42.223 [2024-07-13 06:14:33.798495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.223 [2024-07-13 06:14:33.798504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.223 [2024-07-13 06:14:33.805700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.223 [2024-07-13 06:14:33.805757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:42.224 [2024-07-13 06:14:33.805772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.224 [2024-07-13 06:14:33.805781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.224 [2024-07-13 06:14:33.811734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.224 [2024-07-13 06:14:33.811782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:42.224 [2024-07-13 06:14:33.811797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.224 [2024-07-13 06:14:33.811806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.224 [2024-07-13 06:14:33.811835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.224 [2024-07-13 06:14:33.811848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:42.224 [2024-07-13 06:14:33.811857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.224 [2024-07-13 06:14:33.811866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.224 [2024-07-13 06:14:33.811932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.224 [2024-07-13 06:14:33.811947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:42.224 [2024-07-13 06:14:33.811962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.224 [2024-07-13 06:14:33.811971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.224 [2024-07-13 06:14:33.812028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.224 [2024-07-13 06:14:33.812044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize memory pools 00:27:42.224 [2024-07-13 06:14:33.812054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.224 [2024-07-13 06:14:33.812062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.224 [2024-07-13 06:14:33.812095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.224 [2024-07-13 06:14:33.812111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:42.224 [2024-07-13 06:14:33.812120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.224 [2024-07-13 06:14:33.812171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.224 [2024-07-13 06:14:33.812214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.224 [2024-07-13 06:14:33.812227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:42.224 [2024-07-13 06:14:33.812237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.224 [2024-07-13 06:14:33.812246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.224 [2024-07-13 06:14:33.812306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:42.224 [2024-07-13 06:14:33.812321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:42.224 [2024-07-13 06:14:33.812337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:42.224 [2024-07-13 06:14:33.812346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:42.224 [2024-07-13 06:14:33.812473] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 24.771 ms, result 0 00:27:42.482 00:27:42.482 00:27:42.482 06:14:34 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:27:42.740 [2024-07-13 06:14:34.216046] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:27:42.740 [2024-07-13 06:14:34.216256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97178 ] 00:27:42.740 [2024-07-13 06:14:34.362456] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.740 [2024-07-13 06:14:34.401224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.000 [2024-07-13 06:14:34.485547] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:43.000 [2024-07-13 06:14:34.485643] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:43.000 [2024-07-13 06:14:34.639102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.000 [2024-07-13 06:14:34.639192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:43.000 [2024-07-13 06:14:34.639222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:43.000 [2024-07-13 06:14:34.639232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.000 [2024-07-13 06:14:34.639336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.000 [2024-07-13 06:14:34.639356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:43.000 [2024-07-13 06:14:34.639370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:27:43.000 [2024-07-13 06:14:34.639380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.000 [2024-07-13 06:14:34.639416] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:43.000 [2024-07-13 06:14:34.639704] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:43.000 [2024-07-13 06:14:34.639740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.000 [2024-07-13 06:14:34.639752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:43.000 [2024-07-13 06:14:34.639763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:27:43.000 [2024-07-13 06:14:34.639776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.000 [2024-07-13 06:14:34.640273] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:27:43.000 [2024-07-13 06:14:34.640316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.000 [2024-07-13 06:14:34.640334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:43.000 [2024-07-13 06:14:34.640351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:27:43.000 [2024-07-13 06:14:34.640360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.000 [2024-07-13 06:14:34.640412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.000 [2024-07-13 06:14:34.640434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:43.000 [2024-07-13 06:14:34.640445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:43.000 [2024-07-13 06:14:34.640454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.000 [2024-07-13 06:14:34.640788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:43.000 [2024-07-13 06:14:34.640804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:43.000 [2024-07-13 06:14:34.640821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:27:43.000 [2024-07-13 06:14:34.640831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.000 [2024-07-13 06:14:34.640908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.000 [2024-07-13 06:14:34.640924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:43.000 [2024-07-13 06:14:34.640934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:27:43.000 [2024-07-13 06:14:34.640943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.000 [2024-07-13 06:14:34.640973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.001 [2024-07-13 06:14:34.640988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:43.001 [2024-07-13 06:14:34.640998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:43.001 [2024-07-13 06:14:34.641006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.001 [2024-07-13 06:14:34.641043] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:43.001 [2024-07-13 06:14:34.642456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.001 [2024-07-13 06:14:34.642483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:43.001 [2024-07-13 06:14:34.642497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.418 ms 00:27:43.001 [2024-07-13 06:14:34.642510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.001 [2024-07-13 06:14:34.642548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.001 [2024-07-13 06:14:34.642568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:43.001 [2024-07-13 06:14:34.642578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:43.001 [2024-07-13 06:14:34.642599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.001 [2024-07-13 06:14:34.642624] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:43.001 [2024-07-13 06:14:34.642648] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:43.001 [2024-07-13 06:14:34.642690] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:43.001 [2024-07-13 06:14:34.642711] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:43.001 [2024-07-13 06:14:34.642812] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:43.001 [2024-07-13 06:14:34.642826] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:43.001 [2024-07-13 06:14:34.642842] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:43.001 [2024-07-13 06:14:34.642855] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:43.001 [2024-07-13 06:14:34.642866] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:43.001 [2024-07-13 06:14:34.642885] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:43.001 [2024-07-13 06:14:34.642894] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:43.001 [2024-07-13 06:14:34.642906] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:43.001 [2024-07-13 06:14:34.642915] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:43.001 [2024-07-13 06:14:34.642925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.001 [2024-07-13 06:14:34.642934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:43.001 [2024-07-13 06:14:34.642951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:27:43.001 [2024-07-13 06:14:34.642960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.001 [2024-07-13 06:14:34.643052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.001 [2024-07-13 06:14:34.643065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:43.001 [2024-07-13 06:14:34.643074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:43.001 [2024-07-13 06:14:34.643083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.001 [2024-07-13 06:14:34.643227] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:43.001 [2024-07-13 06:14:34.643247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:43.001 [2024-07-13 06:14:34.643258] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:43.001 [2024-07-13 06:14:34.643267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:43.001 [2024-07-13 06:14:34.643288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:43.001 [2024-07-13 06:14:34.643300] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:43.001 [2024-07-13 06:14:34.643309] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:43.001 [2024-07-13 06:14:34.643319] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:43.001 [2024-07-13 06:14:34.643328] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:43.001 [2024-07-13 06:14:34.643337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:43.001 [2024-07-13 06:14:34.643345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:43.001 [2024-07-13 06:14:34.643354] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:43.001 [2024-07-13 06:14:34.643362] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:43.001 [2024-07-13 06:14:34.643370] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:43.001 [2024-07-13 06:14:34.643379] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:43.001 [2024-07-13 06:14:34.643389] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:43.001 [2024-07-13 06:14:34.643398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:43.001 [2024-07-13 06:14:34.643407] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:43.001 [2024-07-13 06:14:34.643415] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:43.001 [2024-07-13 06:14:34.643424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:43.001 [2024-07-13 06:14:34.643432] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:43.001 [2024-07-13 06:14:34.643445] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:43.001 [2024-07-13 06:14:34.643454] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:43.001 [2024-07-13 06:14:34.643463] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:43.001 [2024-07-13 06:14:34.643471] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:43.001 [2024-07-13 06:14:34.643480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:43.001 [2024-07-13 06:14:34.643488] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:43.001 [2024-07-13 06:14:34.643496] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:43.001 [2024-07-13 06:14:34.643505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:43.001 [2024-07-13 06:14:34.643513] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:43.001 [2024-07-13 06:14:34.643521] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:43.001 [2024-07-13 06:14:34.643530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:43.001 [2024-07-13 06:14:34.643538] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:43.001 [2024-07-13 06:14:34.643562] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:43.001 [2024-07-13 06:14:34.643570] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:43.001 [2024-07-13 06:14:34.643579] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:43.001 [2024-07-13 06:14:34.643587] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:43.001 [2024-07-13 06:14:34.643599] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:43.001 [2024-07-13 06:14:34.643608] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:43.001 [2024-07-13 06:14:34.643616] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:43.001 [2024-07-13 06:14:34.643624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:43.001 [2024-07-13 06:14:34.643633] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:43.001 [2024-07-13 06:14:34.643642] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:43.001 [2024-07-13 06:14:34.643650] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:43.001 [2024-07-13 06:14:34.643662] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:43.001 [2024-07-13 06:14:34.643671] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:43.001 [2024-07-13 06:14:34.643679] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:43.001 [2024-07-13 06:14:34.643697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:43.001 [2024-07-13 06:14:34.643706] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:43.001 [2024-07-13 06:14:34.643714] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:43.001 
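The dump_region lines above and the "SB metadata layout" dump just below describe the same regions in two notations: dump_region prints offsets and sizes in MiB, while the superblock rows give blk_offs/blk_sz in hexadecimal FTL blocks. Under a 4 KiB block the two agree, e.g. the l2p row blk_sz:0x5000 is 20480 blocks = 80.00 MiB, exactly the "blocks: 80.00 MiB" printed for Region l2p above. Below is a minimal standalone sketch of that conversion; the 4 KiB FTL_BLOCK_SIZE is an assumption inferred from those rows, and the region names are matched to superblock type codes by offset and size, not taken from the superblock dump itself.

    #include <stdint.h>
    #include <stdio.h>

    #define FTL_BLOCK_SIZE 4096ULL  /* assumption: 4 KiB FTL block, inferred from l2p = 0x5000 blocks = 80.00 MiB */

    struct region { const char *name; uint64_t blk_offs; uint64_t blk_sz; };

    int main(void)
    {
        /* rows copied from the "SB metadata layout - nvc" dump in this log,
         * names matched to the dump_region output by offset and size */
        static const struct region rows[] = {
            { "sb",      0x0000, 0x0020 },
            { "l2p",     0x0020, 0x5000 },
            { "band_md", 0x5020, 0x0080 },
            { "p2l0",    0x5120, 0x0800 },
        };
        const double mib = 1024.0 * 1024.0;
        for (size_t i = 0; i < sizeof(rows) / sizeof(rows[0]); i++) {
            printf("Region %-8s offset: %.2f MiB blocks: %.2f MiB\n",
                   rows[i].name,
                   (double)(rows[i].blk_offs * FTL_BLOCK_SIZE) / mib,
                   (double)(rows[i].blk_sz  * FTL_BLOCK_SIZE) / mib);
        }
        return 0;
    }

Compiled and run, this reproduces the same offset/size pairs as the corresponding dump_region lines in this log (sb at 0.00/0.12, l2p at 0.12/80.00, band_md at 80.12/0.50, p2l0 at 81.12/8.00 MiB).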
[2024-07-13 06:14:34.643723] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:43.001 [2024-07-13 06:14:34.643731] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:43.001 [2024-07-13 06:14:34.643739] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:43.001 [2024-07-13 06:14:34.643752] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:43.001 [2024-07-13 06:14:34.643763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:43.001 [2024-07-13 06:14:34.643774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:43.001 [2024-07-13 06:14:34.643784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:43.001 [2024-07-13 06:14:34.643793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:43.001 [2024-07-13 06:14:34.643802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:43.001 [2024-07-13 06:14:34.643811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:43.001 [2024-07-13 06:14:34.643820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:43.001 [2024-07-13 06:14:34.643829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:43.001 [2024-07-13 06:14:34.643838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:43.001 [2024-07-13 06:14:34.643847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:43.001 [2024-07-13 06:14:34.643856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:43.001 [2024-07-13 06:14:34.643865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:43.001 [2024-07-13 06:14:34.643874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:43.001 [2024-07-13 06:14:34.643883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:43.001 [2024-07-13 06:14:34.643892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:43.001 [2024-07-13 06:14:34.643903] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:43.001 [2024-07-13 06:14:34.643914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:43.001 [2024-07-13 06:14:34.643924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:43.002 [2024-07-13 06:14:34.643934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:43.002 [2024-07-13 06:14:34.643952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:43.002 [2024-07-13 06:14:34.643961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:43.002 [2024-07-13 06:14:34.643971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.643981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:43.002 [2024-07-13 06:14:34.643990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:27:43.002 [2024-07-13 06:14:34.643999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.658917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.658964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:43.002 [2024-07-13 06:14:34.658998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.865 ms 00:27:43.002 [2024-07-13 06:14:34.659009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.659104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.659119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:43.002 [2024-07-13 06:14:34.659144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:27:43.002 [2024-07-13 06:14:34.659153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.666680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.666725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:43.002 [2024-07-13 06:14:34.666757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.405 ms 00:27:43.002 [2024-07-13 06:14:34.666771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.666812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.666827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:43.002 [2024-07-13 06:14:34.666847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:43.002 [2024-07-13 06:14:34.666856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.666963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.666979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:43.002 [2024-07-13 06:14:34.666990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:27:43.002 [2024-07-13 06:14:34.666999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.667120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.667135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:43.002 [2024-07-13 06:14:34.667196] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:27:43.002 [2024-07-13 06:14:34.667209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.671652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.671688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:43.002 [2024-07-13 06:14:34.671719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.416 ms 00:27:43.002 [2024-07-13 06:14:34.671733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.671875] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:43.002 [2024-07-13 06:14:34.671896] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:43.002 [2024-07-13 06:14:34.671908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.671917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:43.002 [2024-07-13 06:14:34.671931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:27:43.002 [2024-07-13 06:14:34.671940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.682766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.682799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:43.002 [2024-07-13 06:14:34.682829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.798 ms 00:27:43.002 [2024-07-13 06:14:34.682849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.682953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.682967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:43.002 [2024-07-13 06:14:34.682982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:27:43.002 [2024-07-13 06:14:34.682992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.683047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.683076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:43.002 [2024-07-13 06:14:34.683087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:27:43.002 [2024-07-13 06:14:34.683096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.683451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.683485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:43.002 [2024-07-13 06:14:34.683508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:27:43.002 [2024-07-13 06:14:34.683525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.683547] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:27:43.002 [2024-07-13 06:14:34.683565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.683575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:27:43.002 [2024-07-13 06:14:34.683602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:43.002 [2024-07-13 06:14:34.683611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.691115] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:43.002 [2024-07-13 06:14:34.691346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.691364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:43.002 [2024-07-13 06:14:34.691380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.712 ms 00:27:43.002 [2024-07-13 06:14:34.691390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.693571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.693603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:43.002 [2024-07-13 06:14:34.693632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.156 ms 00:27:43.002 [2024-07-13 06:14:34.693641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.693725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.693741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:43.002 [2024-07-13 06:14:34.693757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:43.002 [2024-07-13 06:14:34.693765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.693809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.693824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:43.002 [2024-07-13 06:14:34.693834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:43.002 [2024-07-13 06:14:34.693843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.693887] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:43.002 [2024-07-13 06:14:34.693901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.693910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:43.002 [2024-07-13 06:14:34.693920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:43.002 [2024-07-13 06:14:34.693933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.697652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.697689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:43.002 [2024-07-13 06:14:34.697714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.699 ms 00:27:43.002 [2024-07-13 06:14:34.697735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.697799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.002 [2024-07-13 06:14:34.697815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:43.002 [2024-07-13 06:14:34.697835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.027 ms 00:27:43.002 [2024-07-13 06:14:34.697850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.002 [2024-07-13 06:14:34.699103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 59.411 ms, result 0 00:28:29.344  Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-13 06:15:20.842517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.344 [2024-07-13 06:15:20.842589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:29.344 [2024-07-13 06:15:20.842626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:29.344 [2024-07-13 06:15:20.842649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.344 [2024-07-13 06:15:20.842679] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:29.344 [2024-07-13 06:15:20.843207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.344 [2024-07-13 06:15:20.843229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:29.344 [2024-07-13 06:15:20.843242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:28:29.344 [2024-07-13 06:15:20.843253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.344 [2024-07-13 06:15:20.843512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.344 [2024-07-13 06:15:20.843531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:29.344 [2024-07-13 06:15:20.843543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:28:29.344 [2024-07-13 06:15:20.843569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.344 [2024-07-13 06:15:20.843642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.344 [2024-07-13 06:15:20.843655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast
persist NV cache metadata 00:28:29.344 [2024-07-13 06:15:20.843666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:29.344 [2024-07-13 06:15:20.843676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.344 [2024-07-13 06:15:20.843748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.344 [2024-07-13 06:15:20.843763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:29.344 [2024-07-13 06:15:20.843775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:29.344 [2024-07-13 06:15:20.843785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.344 [2024-07-13 06:15:20.843804] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:29.344 [2024-07-13 06:15:20.843826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.843839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.843850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.843861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.843871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.843882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.843893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.843903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.843914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.843925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.843936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.843947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.843959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.843971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.843982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.843992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 
06:15:20.844036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:29.344 [2024-07-13 06:15:20.844267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 
00:28:29.345 [2024-07-13 06:15:20.844345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 
wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.844979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.845003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:29.345 [2024-07-13 06:15:20.845022] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:29.345 [2024-07-13 06:15:20.845033] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2bf28cc8-5557-4d51-be79-31ccfa58f7c9 00:28:29.345 [2024-07-13 06:15:20.845046] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:29.345 [2024-07-13 06:15:20.845056] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:28:29.345 [2024-07-13 06:15:20.845065] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:29.345 [2024-07-13 06:15:20.845076] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:29.345 [2024-07-13 06:15:20.845126] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:29.345 [2024-07-13 06:15:20.845161] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:29.345 [2024-07-13 06:15:20.845173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:29.345 [2024-07-13 06:15:20.845183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:29.345 [2024-07-13 06:15:20.845194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:29.345 [2024-07-13 06:15:20.845205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.345 [2024-07-13 06:15:20.845217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:29.345 [2024-07-13 06:15:20.845233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.402 ms 00:28:29.345 [2024-07-13 06:15:20.845244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.345 [2024-07-13 06:15:20.846819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.345 [2024-07-13 06:15:20.846849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:29.345 [2024-07-13 06:15:20.846862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.552 ms 00:28:29.345 [2024-07-13 06:15:20.846871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.345 [2024-07-13 06:15:20.846965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:29.345 [2024-07-13 06:15:20.846985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:29.345 [2024-07-13 06:15:20.847004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:28:29.345 [2024-07-13 06:15:20.847014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.345 [2024-07-13 
06:15:20.851872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.345 [2024-07-13 06:15:20.851907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:29.345 [2024-07-13 06:15:20.851920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.345 [2024-07-13 06:15:20.851930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.345 [2024-07-13 06:15:20.851984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.345 [2024-07-13 06:15:20.852005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:29.345 [2024-07-13 06:15:20.852016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.345 [2024-07-13 06:15:20.852025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.345 [2024-07-13 06:15:20.852064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.345 [2024-07-13 06:15:20.852088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:29.345 [2024-07-13 06:15:20.852098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.345 [2024-07-13 06:15:20.852108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.345 [2024-07-13 06:15:20.852127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.345 [2024-07-13 06:15:20.852171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:29.345 [2024-07-13 06:15:20.852187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.345 [2024-07-13 06:15:20.852205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.345 [2024-07-13 06:15:20.862209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.345 [2024-07-13 06:15:20.862270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:29.345 [2024-07-13 06:15:20.862305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.346 [2024-07-13 06:15:20.862316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.346 [2024-07-13 06:15:20.871801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.346 [2024-07-13 06:15:20.872842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:29.346 [2024-07-13 06:15:20.873046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.346 [2024-07-13 06:15:20.873163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.346 [2024-07-13 06:15:20.873547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.346 [2024-07-13 06:15:20.873827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:29.346 [2024-07-13 06:15:20.873868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.346 [2024-07-13 06:15:20.873890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.346 [2024-07-13 06:15:20.873964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.346 [2024-07-13 06:15:20.874005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:29.346 [2024-07-13 06:15:20.874025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.346 [2024-07-13 06:15:20.874051] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.346 [2024-07-13 06:15:20.874206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.346 [2024-07-13 06:15:20.874235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:29.346 [2024-07-13 06:15:20.874256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.346 [2024-07-13 06:15:20.874273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.346 [2024-07-13 06:15:20.874328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.346 [2024-07-13 06:15:20.874355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:29.346 [2024-07-13 06:15:20.874405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.346 [2024-07-13 06:15:20.874424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.346 [2024-07-13 06:15:20.874528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.346 [2024-07-13 06:15:20.874553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:29.346 [2024-07-13 06:15:20.874572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.346 [2024-07-13 06:15:20.874592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.346 [2024-07-13 06:15:20.874669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:29.346 [2024-07-13 06:15:20.874711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:29.346 [2024-07-13 06:15:20.874732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:29.346 [2024-07-13 06:15:20.874757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:29.346 [2024-07-13 06:15:20.874977] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 32.405 ms, result 0 00:28:29.604 00:28:29.604 00:28:29.604 06:15:21 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:31.504 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:31.504 06:15:22 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:28:31.504 [2024-07-13 06:15:22.968591] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
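The 'FTL fast shutdown' block above replays the startup steps as Rollback entries in exactly the reverse order of their Actions (Initialize reloc back through Open base bdev): each management step pairs a forward action with an undo handler, and shutdown, like a failed startup, walks the completed steps backwards as a LIFO stack. A minimal sketch of that pattern follows, with hypothetical no-op bodies standing in for the real handlers in mngt/ftl_mngt.c:

    #include <stdio.h>

    typedef int (*step_fn)(void);
    struct mngt_step { const char *name; step_fn action; step_fn rollback; };

    static int noop(void) { return 0; }  /* stand-in for the real step handlers */

    int main(void)
    {
        /* registration order mirrors the startup Actions seen in this log */
        static const struct mngt_step steps[] = {
            { "Open base bdev",          noop, noop },
            { "Open cache bdev",         noop, noop },
            { "Initialize superblock",   noop, noop },
            { "Initialize memory pools", noop, noop },
            { "Initialize bands",        noop, noop },
        };
        int n = (int)(sizeof(steps) / sizeof(steps[0])), done = 0;

        while (done < n && steps[done].action() == 0)   /* startup: run forward, stop on first failure */
            done++;
        for (int i = done - 1; i >= 0; i--) {           /* shutdown or failure: undo in reverse (LIFO) */
            printf("Rollback name: %s\n", steps[i].name);
            steps[i].rollback();
        }
        return 0;
    }

Running it prints the Rollback names from "Initialize bands" back to "Open base bdev", mirroring the ordering of the Rollback entries above.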
00:28:31.504 [2024-07-13 06:15:22.968978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97645 ] 00:28:31.504 [2024-07-13 06:15:23.117972] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.504 [2024-07-13 06:15:23.166735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.763 [2024-07-13 06:15:23.256360] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:31.763 [2024-07-13 06:15:23.256715] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:31.763 [2024-07-13 06:15:23.411579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.763 [2024-07-13 06:15:23.411812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:31.763 [2024-07-13 06:15:23.411947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:31.763 [2024-07-13 06:15:23.411998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.763 [2024-07-13 06:15:23.412245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.763 [2024-07-13 06:15:23.412365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:31.763 [2024-07-13 06:15:23.412507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:31.763 [2024-07-13 06:15:23.412573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.763 [2024-07-13 06:15:23.412709] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:31.763 [2024-07-13 06:15:23.413031] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:31.763 [2024-07-13 06:15:23.413347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.763 [2024-07-13 06:15:23.413536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:31.763 [2024-07-13 06:15:23.413602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:28:31.763 [2024-07-13 06:15:23.413698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.763 [2024-07-13 06:15:23.414166] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:31.763 [2024-07-13 06:15:23.414216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.763 [2024-07-13 06:15:23.414230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:31.763 [2024-07-13 06:15:23.414247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:31.763 [2024-07-13 06:15:23.414257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.763 [2024-07-13 06:15:23.414307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.763 [2024-07-13 06:15:23.414321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:31.763 [2024-07-13 06:15:23.414341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:28:31.763 [2024-07-13 06:15:23.414358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.763 [2024-07-13 06:15:23.414699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
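Nearly every record in this log is one of the four-line quartets emitted by trace_step in mngt/ftl_mngt.c: an Action marker, the step name, the wall-clock duration, and a status code (the 'Check configuration' step above took 0.005 ms with status 0). Here is a minimal sketch of a step timer producing the same quartet; run_step and check_configuration are hypothetical stand-ins, not SPDK's actual internals:

    #include <stdio.h>
    #include <time.h>

    /* illustrative stand-in for the Action/name/duration/status quartet */
    static int run_step(const char *name, int (*fn)(void))
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int status = fn();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (double)(t1.tv_sec - t0.tv_sec) * 1e3 +
                    (double)(t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("[FTL][ftl0] Action\n");
        printf("[FTL][ftl0] name: %s\n", name);
        printf("[FTL][ftl0] duration: %.3f ms\n", ms);
        printf("[FTL][ftl0] status: %d\n", status);
        return status;
    }

    static int check_configuration(void) { return 0; }  /* hypothetical step body */

    int main(void)
    {
        return run_step("Check configuration", check_configuration);
    }

CLOCK_MONOTONIC keeps the measured duration immune to wall-clock adjustments in the middle of a step.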
00:28:31.763 [2024-07-13 06:15:23.414723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:31.763 [2024-07-13 06:15:23.414740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:28:31.763 [2024-07-13 06:15:23.414750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.763 [2024-07-13 06:15:23.414831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.763 [2024-07-13 06:15:23.414849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:31.763 [2024-07-13 06:15:23.414859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:28:31.763 [2024-07-13 06:15:23.414868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.763 [2024-07-13 06:15:23.414900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.763 [2024-07-13 06:15:23.414916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:31.763 [2024-07-13 06:15:23.414926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:31.763 [2024-07-13 06:15:23.414935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.763 [2024-07-13 06:15:23.414976] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:31.763 [2024-07-13 06:15:23.416566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.763 [2024-07-13 06:15:23.416744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:31.763 [2024-07-13 06:15:23.416886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.595 ms 00:28:31.763 [2024-07-13 06:15:23.416939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.763 [2024-07-13 06:15:23.417052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.763 [2024-07-13 06:15:23.417163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:31.763 [2024-07-13 06:15:23.417218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:31.763 [2024-07-13 06:15:23.417266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.763 [2024-07-13 06:15:23.417448] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:31.763 [2024-07-13 06:15:23.417526] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:31.763 [2024-07-13 06:15:23.417621] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:31.763 [2024-07-13 06:15:23.417730] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:31.763 [2024-07-13 06:15:23.417882] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:31.763 [2024-07-13 06:15:23.418022] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:31.763 [2024-07-13 06:15:23.418092] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:31.763 [2024-07-13 06:15:23.418192] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:31.763 [2024-07-13 06:15:23.418327] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:31.763 [2024-07-13 06:15:23.418382] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:31.763 [2024-07-13 06:15:23.418435] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:31.763 [2024-07-13 06:15:23.418474] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:31.763 [2024-07-13 06:15:23.418581] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:31.763 [2024-07-13 06:15:23.418623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.763 [2024-07-13 06:15:23.418660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:31.763 [2024-07-13 06:15:23.418696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.177 ms 00:28:31.763 [2024-07-13 06:15:23.418731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.763 [2024-07-13 06:15:23.418855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.763 [2024-07-13 06:15:23.418902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:31.763 [2024-07-13 06:15:23.418940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:31.763 [2024-07-13 06:15:23.418955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.763 [2024-07-13 06:15:23.419051] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:31.763 [2024-07-13 06:15:23.419068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:31.763 [2024-07-13 06:15:23.419079] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:31.763 [2024-07-13 06:15:23.419089] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.763 [2024-07-13 06:15:23.419099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:31.763 [2024-07-13 06:15:23.419113] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:31.763 [2024-07-13 06:15:23.419124] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:31.763 [2024-07-13 06:15:23.419134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:31.763 [2024-07-13 06:15:23.419143] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:31.763 [2024-07-13 06:15:23.419153] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:31.763 [2024-07-13 06:15:23.419177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:31.763 [2024-07-13 06:15:23.419188] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:31.763 [2024-07-13 06:15:23.419197] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:31.763 [2024-07-13 06:15:23.419207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:31.763 [2024-07-13 06:15:23.419216] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:31.763 [2024-07-13 06:15:23.419226] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.763 [2024-07-13 06:15:23.419236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:31.763 [2024-07-13 06:15:23.419245] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:31.763 [2024-07-13 06:15:23.419254] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.763 [2024-07-13 06:15:23.419264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:31.764 [2024-07-13 06:15:23.419273] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:31.764 [2024-07-13 06:15:23.419285] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.764 [2024-07-13 06:15:23.419295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:31.764 [2024-07-13 06:15:23.419304] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:31.764 [2024-07-13 06:15:23.419313] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.764 [2024-07-13 06:15:23.419323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:31.764 [2024-07-13 06:15:23.419332] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:31.764 [2024-07-13 06:15:23.419341] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.764 [2024-07-13 06:15:23.419350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:31.764 [2024-07-13 06:15:23.419359] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:31.764 [2024-07-13 06:15:23.419368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.764 [2024-07-13 06:15:23.419378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:31.764 [2024-07-13 06:15:23.419387] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:31.764 [2024-07-13 06:15:23.419396] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:31.764 [2024-07-13 06:15:23.419406] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:31.764 [2024-07-13 06:15:23.419417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:31.764 [2024-07-13 06:15:23.419427] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:31.764 [2024-07-13 06:15:23.419441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:31.764 [2024-07-13 06:15:23.419451] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:31.764 [2024-07-13 06:15:23.419460] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.764 [2024-07-13 06:15:23.419469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:31.764 [2024-07-13 06:15:23.419479] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:31.764 [2024-07-13 06:15:23.419488] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.764 [2024-07-13 06:15:23.419512] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:31.764 [2024-07-13 06:15:23.419525] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:31.764 [2024-07-13 06:15:23.419534] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:31.764 [2024-07-13 06:15:23.419553] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.764 [2024-07-13 06:15:23.419570] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:31.764 [2024-07-13 06:15:23.419579] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:31.764 [2024-07-13 06:15:23.419588] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:31.764 
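One sanity check the superblock dumps make easy: the trailing type:0xfffffffe row in each device's list (apparently the unallocated remainder) should end exactly at the device capacity reported by ftl_layout_setup, and it does: 0x7220 + 0x13c0e0 blocks on the NV cache device is 5171.00 MiB, and 0x19003a0 + 0x3fc60 on the base device (rows shown earlier in this log) is 103424.00 MiB. A small sketch of that tie-out, again assuming the 4 KiB block:

    #include <stdint.h>
    #include <stdio.h>

    #define FTL_BLOCK_SIZE 4096ULL  /* assumed 4 KiB FTL block, consistent with the dumps in this log */

    int main(void)
    {
        /* blk_offs + blk_sz of the last (type:0xfffffffe) region, copied from the SB layout dumps */
        uint64_t nvc_blocks  = 0x7220    + 0x13c0e0;
        uint64_t base_blocks = 0x19003a0 + 0x3fc60;

        printf("NV cache capacity:    %.2f MiB\n",
               (double)(nvc_blocks * FTL_BLOCK_SIZE) / (1024.0 * 1024.0));   /* 5171.00 */
        printf("Base device capacity: %.2f MiB\n",
               (double)(base_blocks * FTL_BLOCK_SIZE) / (1024.0 * 1024.0));  /* 103424.00 */
        return 0;
    }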
[2024-07-13 06:15:23.419598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:31.764 [2024-07-13 06:15:23.419607] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:31.764 [2024-07-13 06:15:23.419616] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:31.764 [2024-07-13 06:15:23.419645] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:31.764 [2024-07-13 06:15:23.419659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:31.764 [2024-07-13 06:15:23.419670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:31.764 [2024-07-13 06:15:23.419681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:31.764 [2024-07-13 06:15:23.419691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:31.764 [2024-07-13 06:15:23.419702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:31.764 [2024-07-13 06:15:23.419726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:31.764 [2024-07-13 06:15:23.419737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:31.764 [2024-07-13 06:15:23.419747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:31.764 [2024-07-13 06:15:23.419757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:31.764 [2024-07-13 06:15:23.419768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:31.764 [2024-07-13 06:15:23.419778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:31.764 [2024-07-13 06:15:23.419789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:31.764 [2024-07-13 06:15:23.419801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:31.764 [2024-07-13 06:15:23.419812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:31.764 [2024-07-13 06:15:23.419823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:31.764 [2024-07-13 06:15:23.419836] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:31.764 [2024-07-13 06:15:23.419848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:31.764 [2024-07-13 06:15:23.419859] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:31.764 [2024-07-13 06:15:23.419870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:31.764 [2024-07-13 06:15:23.419902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:31.764 [2024-07-13 06:15:23.419913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:31.764 [2024-07-13 06:15:23.419924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.764 [2024-07-13 06:15:23.419935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:31.764 [2024-07-13 06:15:23.419946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:28:31.764 [2024-07-13 06:15:23.419956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.764 [2024-07-13 06:15:23.434896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.764 [2024-07-13 06:15:23.434966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:31.764 [2024-07-13 06:15:23.435000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.882 ms 00:28:31.764 [2024-07-13 06:15:23.435010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.764 [2024-07-13 06:15:23.435105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.764 [2024-07-13 06:15:23.435135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:31.764 [2024-07-13 06:15:23.435162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:31.764 [2024-07-13 06:15:23.435190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.764 [2024-07-13 06:15:23.442400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.764 [2024-07-13 06:15:23.442458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:31.764 [2024-07-13 06:15:23.442488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.118 ms 00:28:31.764 [2024-07-13 06:15:23.442503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.764 [2024-07-13 06:15:23.442547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.764 [2024-07-13 06:15:23.442561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:31.764 [2024-07-13 06:15:23.442572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:31.764 [2024-07-13 06:15:23.442582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.764 [2024-07-13 06:15:23.442716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.764 [2024-07-13 06:15:23.442733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:31.764 [2024-07-13 06:15:23.442745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:28:31.764 [2024-07-13 06:15:23.442756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.764 [2024-07-13 06:15:23.442887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.764 [2024-07-13 06:15:23.442905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:31.764 [2024-07-13 06:15:23.442916] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:28:31.764 [2024-07-13 06:15:23.442926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.764 [2024-07-13 06:15:23.447562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.764 [2024-07-13 06:15:23.447599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:31.764 [2024-07-13 06:15:23.447629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.609 ms 00:28:31.764 [2024-07-13 06:15:23.447644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.764 [2024-07-13 06:15:23.447837] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:31.764 [2024-07-13 06:15:23.447864] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:31.764 [2024-07-13 06:15:23.447886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.764 [2024-07-13 06:15:23.447905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:31.764 [2024-07-13 06:15:23.447920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:28:31.764 [2024-07-13 06:15:23.447931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.764 [2024-07-13 06:15:23.462527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.764 [2024-07-13 06:15:23.462574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:31.764 [2024-07-13 06:15:23.462603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.575 ms 00:28:31.764 [2024-07-13 06:15:23.462613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.764 [2024-07-13 06:15:23.462717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.764 [2024-07-13 06:15:23.462731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:31.764 [2024-07-13 06:15:23.462745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:28:31.764 [2024-07-13 06:15:23.462756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.764 [2024-07-13 06:15:23.462844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.764 [2024-07-13 06:15:23.462861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:31.764 [2024-07-13 06:15:23.462872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:31.764 [2024-07-13 06:15:23.462882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.764 [2024-07-13 06:15:23.463259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.764 [2024-07-13 06:15:23.463278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:31.764 [2024-07-13 06:15:23.463290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:28:31.764 [2024-07-13 06:15:23.463300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.764 [2024-07-13 06:15:23.463321] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:31.764 [2024-07-13 06:15:23.463343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.764 [2024-07-13 06:15:23.463353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:28:31.764 [2024-07-13 06:15:23.463367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:31.764 [2024-07-13 06:15:23.463377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.764 [2024-07-13 06:15:23.471222] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:31.764 [2024-07-13 06:15:23.471424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.765 [2024-07-13 06:15:23.471441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:31.765 [2024-07-13 06:15:23.471456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.000 ms 00:28:31.765 [2024-07-13 06:15:23.471481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.765 [2024-07-13 06:15:23.473717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.765 [2024-07-13 06:15:23.473762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:31.765 [2024-07-13 06:15:23.473791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.182 ms 00:28:31.765 [2024-07-13 06:15:23.473801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.765 [2024-07-13 06:15:23.473875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.765 [2024-07-13 06:15:23.473892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:31.765 [2024-07-13 06:15:23.473906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:31.765 [2024-07-13 06:15:23.473916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.765 [2024-07-13 06:15:23.473960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.765 [2024-07-13 06:15:23.474004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:31.765 [2024-07-13 06:15:23.474016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:31.765 [2024-07-13 06:15:23.474026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.765 [2024-07-13 06:15:23.474066] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:31.765 [2024-07-13 06:15:23.474082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.765 [2024-07-13 06:15:23.474092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:31.765 [2024-07-13 06:15:23.474103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:28:31.765 [2024-07-13 06:15:23.474116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.765 [2024-07-13 06:15:23.478530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.765 [2024-07-13 06:15:23.478569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:31.765 [2024-07-13 06:15:23.478599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.389 ms 00:28:31.765 [2024-07-13 06:15:23.478609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.765 [2024-07-13 06:15:23.478678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.765 [2024-07-13 06:15:23.478710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:31.765 [2024-07-13 06:15:23.478736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.030 ms 00:29:17.707 [2024-07-13 06:16:09.478746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.707 [2024-07-13 06:16:09.479914] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 67.750 ms, result 0 00:29:17.707  Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-13 06:16:09.147202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.707 [2024-07-13 06:16:09.147298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:17.707 [2024-07-13 06:16:09.147334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:17.707 [2024-07-13 06:16:09.147345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.707 [2024-07-13 06:16:09.148680] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:17.707 [2024-07-13 06:16:09.153686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.707 [2024-07-13 06:16:09.153880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:17.707 [2024-07-13 06:16:09.153997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.881 ms 00:29:17.707 [2024-07-13 06:16:09.154055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.707 [2024-07-13 06:16:09.163823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.707 [2024-07-13 06:16:09.164014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:17.707 [2024-07-13 06:16:09.164155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.195 ms 00:29:17.707 [2024-07-13 06:16:09.164207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.707 [2024-07-13 06:16:09.164275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.707 [2024-07-13 06:16:09.164365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast 
persist NV cache metadata 00:29:17.707 [2024-07-13 06:16:09.164422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:17.707 [2024-07-13 06:16:09.164458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.707 [2024-07-13 06:16:09.164560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.707 [2024-07-13 06:16:09.164604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:17.707 [2024-07-13 06:16:09.164707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:17.707 [2024-07-13 06:16:09.164722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.707 [2024-07-13 06:16:09.164742] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:17.707 [2024-07-13 06:16:09.164758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 121088 / 261120 wr_cnt: 1 state: open 00:29:17.707 [2024-07-13 06:16:09.164771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 
[2024-07-13 06:16:09.164955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.164996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:17.707 [2024-07-13 06:16:09.165329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 
state: free 00:29:17.707 [2024-07-13 06:16:09.165340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 
0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.165999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.166023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:17.708 [2024-07-13 06:16:09.166062] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:17.708 [2024-07-13 06:16:09.166088] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2bf28cc8-5557-4d51-be79-31ccfa58f7c9 00:29:17.708 [2024-07-13 06:16:09.166099] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 121088 00:29:17.708 [2024-07-13 06:16:09.166108] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 121120 00:29:17.708 [2024-07-13 06:16:09.166117] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 121088 00:29:17.708 [2024-07-13 06:16:09.166131] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:29:17.708 [2024-07-13 06:16:09.166148] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:17.708 [2024-07-13 06:16:09.166159] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:17.708 [2024-07-13 06:16:09.166168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:17.708 [2024-07-13 06:16:09.166177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:17.708 [2024-07-13 06:16:09.166186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:17.708 [2024-07-13 06:16:09.166196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.708 [2024-07-13 06:16:09.166206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:17.708 [2024-07-13 06:16:09.166216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:29:17.708 [2024-07-13 06:16:09.166226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.708 [2024-07-13 06:16:09.167571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.708 [2024-07-13 06:16:09.167629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:17.708 [2024-07-13 06:16:09.167642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.325 ms 00:29:17.708 [2024-07-13 06:16:09.167652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.708 [2024-07-13 06:16:09.167747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.708 [2024-07-13 06:16:09.167762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:17.708 [2024-07-13 06:16:09.167773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:29:17.708 [2024-07-13 06:16:09.167793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
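The statistics dump above is internally consistent: the reported WAF can be re-derived from the raw counters as total writes / user writes = 121120 / 121088 ≈ 1.0003, i.e. the roughly 1 GiB of user I/O cost only 32 extra blocks of FTL metadata writes. A minimal sketch of that cross-check in Python (standalone, not part of the test harness; the counter values are copied verbatim from the dump above):

    # Re-derive the WAF reported by ftl_dev_dump_stats from the counters above.
    total_writes = 121120   # "total writes" from the stats dump
    user_writes = 121088    # "user writes" (equal to "total valid LBAs")
    waf = total_writes / user_writes
    print(f"WAF = {waf:.4f}")                                          # WAF = 1.0003, matching the log
    print(f"metadata overhead = {total_writes - user_writes} blocks")  # 32 blocks
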
00:29:17.708 [2024-07-13 06:16:09.171941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:17.708 [2024-07-13 06:16:09.171989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:17.708 [2024-07-13 06:16:09.172002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:17.709 [2024-07-13 06:16:09.172011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.709 [2024-07-13 06:16:09.172075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:17.709 [2024-07-13 06:16:09.172088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:17.709 [2024-07-13 06:16:09.172098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:17.709 [2024-07-13 06:16:09.172107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.709 [2024-07-13 06:16:09.172167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:17.709 [2024-07-13 06:16:09.172190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:17.709 [2024-07-13 06:16:09.172201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:17.709 [2024-07-13 06:16:09.172218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.709 [2024-07-13 06:16:09.172284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:17.709 [2024-07-13 06:16:09.172296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:17.709 [2024-07-13 06:16:09.172306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:17.709 [2024-07-13 06:16:09.172316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.709 [2024-07-13 06:16:09.179466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:17.709 [2024-07-13 06:16:09.179513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:17.709 [2024-07-13 06:16:09.179543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:17.709 [2024-07-13 06:16:09.179553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.709 [2024-07-13 06:16:09.186328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:17.709 [2024-07-13 06:16:09.186371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:17.709 [2024-07-13 06:16:09.186401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:17.709 [2024-07-13 06:16:09.186411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.709 [2024-07-13 06:16:09.186472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:17.709 [2024-07-13 06:16:09.186487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:17.709 [2024-07-13 06:16:09.186504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:17.709 [2024-07-13 06:16:09.186514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.709 [2024-07-13 06:16:09.186541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:17.709 [2024-07-13 06:16:09.186553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:17.709 [2024-07-13 06:16:09.186563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:17.709 [2024-07-13 
06:16:09.186571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.709 [2024-07-13 06:16:09.186659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:17.709 [2024-07-13 06:16:09.186691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:17.709 [2024-07-13 06:16:09.186707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:17.709 [2024-07-13 06:16:09.186717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.709 [2024-07-13 06:16:09.186774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:17.709 [2024-07-13 06:16:09.186791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:17.709 [2024-07-13 06:16:09.186803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:17.709 [2024-07-13 06:16:09.186813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.709 [2024-07-13 06:16:09.186854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:17.709 [2024-07-13 06:16:09.186867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:17.709 [2024-07-13 06:16:09.186889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:17.709 [2024-07-13 06:16:09.186903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.709 [2024-07-13 06:16:09.186952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:17.709 [2024-07-13 06:16:09.186973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:17.709 [2024-07-13 06:16:09.186985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:17.709 [2024-07-13 06:16:09.186995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.709 [2024-07-13 06:16:09.187147] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 42.958 ms, result 0 00:29:18.311 00:29:18.311 00:29:18.311 06:16:09 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:29:18.311 [2024-07-13 06:16:09.955645] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
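Before the restore pass below brings up another FTL instance, note that the layout parameters it will print (the same ones already printed during the first startup above) are self-consistent: 20,971,520 L2P entries at a 4-byte address size need exactly 80 MiB, which is the size reported for the l2p region in the NV cache layout dump. A small sketch of that arithmetic in Python (standalone; both inputs are values printed verbatim in this log):

    # Check that the l2p region size follows from the L2P parameters
    # printed by ftl_layout_setup (entry count and address size).
    l2p_entries = 20971520        # "L2P entries" notice
    addr_size_bytes = 4           # "L2P address size" notice
    l2p_mib = l2p_entries * addr_size_bytes / (1024 * 1024)
    print(f"l2p region = {l2p_mib:.2f} MiB")  # 80.00 MiB, matching "Region l2p ... blocks: 80.00 MiB"
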
00:29:18.311 [2024-07-13 06:16:09.955837] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98100 ] 00:29:18.624 [2024-07-13 06:16:10.098400] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.624 [2024-07-13 06:16:10.132017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.624 [2024-07-13 06:16:10.211219] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:18.624 [2024-07-13 06:16:10.211306] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:18.889 [2024-07-13 06:16:10.365807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.889 [2024-07-13 06:16:10.365853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:18.889 [2024-07-13 06:16:10.365886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:18.889 [2024-07-13 06:16:10.365896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.889 [2024-07-13 06:16:10.365963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.889 [2024-07-13 06:16:10.365980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:18.889 [2024-07-13 06:16:10.366003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:18.889 [2024-07-13 06:16:10.366012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.889 [2024-07-13 06:16:10.366046] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:18.889 [2024-07-13 06:16:10.366367] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:18.889 [2024-07-13 06:16:10.366396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.889 [2024-07-13 06:16:10.366408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:18.889 [2024-07-13 06:16:10.366422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:29:18.889 [2024-07-13 06:16:10.366433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.889 [2024-07-13 06:16:10.366913] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:18.889 [2024-07-13 06:16:10.366961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.889 [2024-07-13 06:16:10.366973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:18.889 [2024-07-13 06:16:10.367000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:18.889 [2024-07-13 06:16:10.367010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.889 [2024-07-13 06:16:10.367062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.889 [2024-07-13 06:16:10.367078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:18.889 [2024-07-13 06:16:10.367089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:18.889 [2024-07-13 06:16:10.367098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.889 [2024-07-13 06:16:10.367555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:18.889 [2024-07-13 06:16:10.367590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:18.889 [2024-07-13 06:16:10.367606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:29:18.889 [2024-07-13 06:16:10.367617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.889 [2024-07-13 06:16:10.367708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.889 [2024-07-13 06:16:10.367727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:18.889 [2024-07-13 06:16:10.367738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:29:18.889 [2024-07-13 06:16:10.367747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.889 [2024-07-13 06:16:10.367779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.889 [2024-07-13 06:16:10.367796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:18.889 [2024-07-13 06:16:10.367806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:18.889 [2024-07-13 06:16:10.367816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.890 [2024-07-13 06:16:10.367848] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:18.890 [2024-07-13 06:16:10.369317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.890 [2024-07-13 06:16:10.369353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:18.890 [2024-07-13 06:16:10.369403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.475 ms 00:29:18.890 [2024-07-13 06:16:10.369415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.890 [2024-07-13 06:16:10.369481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.890 [2024-07-13 06:16:10.369495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:18.890 [2024-07-13 06:16:10.369520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:18.890 [2024-07-13 06:16:10.369533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.890 [2024-07-13 06:16:10.369571] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:18.890 [2024-07-13 06:16:10.369610] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:18.890 [2024-07-13 06:16:10.369675] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:18.890 [2024-07-13 06:16:10.369696] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:18.890 [2024-07-13 06:16:10.369794] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:18.890 [2024-07-13 06:16:10.369808] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:18.890 [2024-07-13 06:16:10.369824] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:18.890 [2024-07-13 06:16:10.369838] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:18.890 [2024-07-13 06:16:10.369851] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:18.890 [2024-07-13 06:16:10.369861] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:18.890 [2024-07-13 06:16:10.369883] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:18.890 [2024-07-13 06:16:10.369895] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:18.890 [2024-07-13 06:16:10.369911] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:18.890 [2024-07-13 06:16:10.369922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.890 [2024-07-13 06:16:10.369932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:18.890 [2024-07-13 06:16:10.369942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:29:18.890 [2024-07-13 06:16:10.369959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.890 [2024-07-13 06:16:10.370047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.890 [2024-07-13 06:16:10.370061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:18.890 [2024-07-13 06:16:10.370072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:18.890 [2024-07-13 06:16:10.370081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.890 [2024-07-13 06:16:10.370192] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:18.890 [2024-07-13 06:16:10.370232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:18.890 [2024-07-13 06:16:10.370256] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:18.890 [2024-07-13 06:16:10.370274] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:18.890 [2024-07-13 06:16:10.370284] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:18.890 [2024-07-13 06:16:10.370297] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:18.890 [2024-07-13 06:16:10.370307] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:18.890 [2024-07-13 06:16:10.370317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:18.890 [2024-07-13 06:16:10.370326] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:18.890 [2024-07-13 06:16:10.370335] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:18.890 [2024-07-13 06:16:10.370344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:18.890 [2024-07-13 06:16:10.370353] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:18.890 [2024-07-13 06:16:10.370362] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:18.890 [2024-07-13 06:16:10.370371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:18.890 [2024-07-13 06:16:10.370380] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:18.890 [2024-07-13 06:16:10.370389] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:18.890 [2024-07-13 06:16:10.370400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:18.890 [2024-07-13 06:16:10.370409] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:18.890 [2024-07-13 06:16:10.370418] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:18.890 [2024-07-13 06:16:10.370427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:18.890 [2024-07-13 06:16:10.370436] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:18.890 [2024-07-13 06:16:10.370447] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:18.890 [2024-07-13 06:16:10.370457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:18.890 [2024-07-13 06:16:10.370466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:18.890 [2024-07-13 06:16:10.370476] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:18.890 [2024-07-13 06:16:10.370485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:18.890 [2024-07-13 06:16:10.370493] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:18.890 [2024-07-13 06:16:10.370502] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:18.890 [2024-07-13 06:16:10.370511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:18.890 [2024-07-13 06:16:10.370520] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:18.890 [2024-07-13 06:16:10.370529] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:18.890 [2024-07-13 06:16:10.370537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:18.890 [2024-07-13 06:16:10.370547] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:18.890 [2024-07-13 06:16:10.370555] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:18.890 [2024-07-13 06:16:10.370564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:18.890 [2024-07-13 06:16:10.370573] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:18.890 [2024-07-13 06:16:10.370582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:18.890 [2024-07-13 06:16:10.370593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:18.890 [2024-07-13 06:16:10.370603] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:18.890 [2024-07-13 06:16:10.370611] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:18.890 [2024-07-13 06:16:10.370620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:18.890 [2024-07-13 06:16:10.370629] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:18.890 [2024-07-13 06:16:10.370638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:18.890 [2024-07-13 06:16:10.370646] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:18.890 [2024-07-13 06:16:10.370659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:18.890 [2024-07-13 06:16:10.370675] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:18.890 [2024-07-13 06:16:10.370685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:18.890 [2024-07-13 06:16:10.370695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:18.890 [2024-07-13 06:16:10.370705] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:18.890 [2024-07-13 06:16:10.370715] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:18.890 
[2024-07-13 06:16:10.370724] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:18.890 [2024-07-13 06:16:10.370733] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:18.890 [2024-07-13 06:16:10.370742] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:18.890 [2024-07-13 06:16:10.370755] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:18.890 [2024-07-13 06:16:10.370768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:18.890 [2024-07-13 06:16:10.370779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:18.890 [2024-07-13 06:16:10.370789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:18.890 [2024-07-13 06:16:10.370799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:18.890 [2024-07-13 06:16:10.370809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:18.890 [2024-07-13 06:16:10.370818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:18.890 [2024-07-13 06:16:10.370828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:18.890 [2024-07-13 06:16:10.370838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:18.890 [2024-07-13 06:16:10.370847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:18.890 [2024-07-13 06:16:10.370857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:18.890 [2024-07-13 06:16:10.370867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:18.890 [2024-07-13 06:16:10.370877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:18.890 [2024-07-13 06:16:10.370886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:18.890 [2024-07-13 06:16:10.370896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:18.890 [2024-07-13 06:16:10.370906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:18.890 [2024-07-13 06:16:10.370917] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:18.890 [2024-07-13 06:16:10.370930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:18.890 [2024-07-13 06:16:10.370940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:18.890 [2024-07-13 06:16:10.370950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:18.890 [2024-07-13 06:16:10.370969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:18.891 [2024-07-13 06:16:10.370979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:18.891 [2024-07-13 06:16:10.370990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.371000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:18.891 [2024-07-13 06:16:10.371010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms 00:29:18.891 [2024-07-13 06:16:10.371019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.384899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.384962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:18.891 [2024-07-13 06:16:10.385002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.805 ms 00:29:18.891 [2024-07-13 06:16:10.385013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.385200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.385223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:18.891 [2024-07-13 06:16:10.385236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:29:18.891 [2024-07-13 06:16:10.385246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.392254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.392292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:18.891 [2024-07-13 06:16:10.392327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.924 ms 00:29:18.891 [2024-07-13 06:16:10.392338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.392380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.392394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:18.891 [2024-07-13 06:16:10.392405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:18.891 [2024-07-13 06:16:10.392414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.392515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.392546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:18.891 [2024-07-13 06:16:10.392587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:18.891 [2024-07-13 06:16:10.392597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.392728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.392768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:18.891 [2024-07-13 06:16:10.392781] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:29:18.891 [2024-07-13 06:16:10.392791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.397132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.397178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:18.891 [2024-07-13 06:16:10.397216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.314 ms 00:29:18.891 [2024-07-13 06:16:10.397228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.397412] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:18.891 [2024-07-13 06:16:10.397449] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:18.891 [2024-07-13 06:16:10.397462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.397487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:18.891 [2024-07-13 06:16:10.397502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:29:18.891 [2024-07-13 06:16:10.397512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.408406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.408460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:18.891 [2024-07-13 06:16:10.408498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.874 ms 00:29:18.891 [2024-07-13 06:16:10.408507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.408620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.408638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:18.891 [2024-07-13 06:16:10.408650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:29:18.891 [2024-07-13 06:16:10.408659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.408745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.408762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:18.891 [2024-07-13 06:16:10.408774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:18.891 [2024-07-13 06:16:10.408784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.409135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.409179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:18.891 [2024-07-13 06:16:10.409201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:29:18.891 [2024-07-13 06:16:10.409226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.409263] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:18.891 [2024-07-13 06:16:10.409287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.409307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:29:18.891 [2024-07-13 06:16:10.409319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:29:18.891 [2024-07-13 06:16:10.409329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.416663] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:18.891 [2024-07-13 06:16:10.416852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.416868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:18.891 [2024-07-13 06:16:10.416883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.499 ms 00:29:18.891 [2024-07-13 06:16:10.416892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.419072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.419122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:18.891 [2024-07-13 06:16:10.419168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.157 ms 00:29:18.891 [2024-07-13 06:16:10.419179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.419247] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:29:18.891 [2024-07-13 06:16:10.419938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.420020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:18.891 [2024-07-13 06:16:10.420049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:29:18.891 [2024-07-13 06:16:10.420067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.420116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.420130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:18.891 [2024-07-13 06:16:10.420141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:18.891 [2024-07-13 06:16:10.420150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.420210] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:18.891 [2024-07-13 06:16:10.420242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.420252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:18.891 [2024-07-13 06:16:10.420265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:18.891 [2024-07-13 06:16:10.420313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.424066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.424119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:18.891 [2024-07-13 06:16:10.424162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.731 ms 00:29:18.891 [2024-07-13 06:16:10.424174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.424252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.891 [2024-07-13 06:16:10.424269] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:18.891 [2024-07-13 06:16:10.424291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:18.891 [2024-07-13 06:16:10.424312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.891 [2024-07-13 06:16:10.428015] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 61.010 ms, result 0 00:30:04.712  Copying: 21/1024 [MB] (21 MBps) Copying: 44/1024 [MB] (22 MBps) Copying: 66/1024 [MB] (22 MBps) Copying: 89/1024 [MB] (22 MBps) Copying: 111/1024 [MB] (22 MBps) Copying: 134/1024 [MB] (22 MBps) Copying: 156/1024 [MB] (22 MBps) Copying: 179/1024 [MB] (22 MBps) Copying: 202/1024 [MB] (23 MBps) Copying: 225/1024 [MB] (22 MBps) Copying: 247/1024 [MB] (22 MBps) Copying: 271/1024 [MB] (23 MBps) Copying: 293/1024 [MB] (22 MBps) Copying: 315/1024 [MB] (22 MBps) Copying: 337/1024 [MB] (22 MBps) Copying: 359/1024 [MB] (22 MBps) Copying: 382/1024 [MB] (22 MBps) Copying: 404/1024 [MB] (22 MBps) Copying: 427/1024 [MB] (22 MBps) Copying: 449/1024 [MB] (22 MBps) Copying: 472/1024 [MB] (22 MBps) Copying: 494/1024 [MB] (21 MBps) Copying: 515/1024 [MB] (21 MBps) Copying: 537/1024 [MB] (21 MBps) Copying: 560/1024 [MB] (22 MBps) Copying: 582/1024 [MB] (22 MBps) Copying: 605/1024 [MB] (22 MBps) Copying: 628/1024 [MB] (22 MBps) Copying: 651/1024 [MB] (23 MBps) Copying: 674/1024 [MB] (23 MBps) Copying: 697/1024 [MB] (22 MBps) Copying: 720/1024 [MB] (22 MBps) Copying: 742/1024 [MB] (22 MBps) Copying: 765/1024 [MB] (23 MBps) Copying: 788/1024 [MB] (22 MBps) Copying: 810/1024 [MB] (22 MBps) Copying: 832/1024 [MB] (22 MBps) Copying: 855/1024 [MB] (22 MBps) Copying: 877/1024 [MB] (22 MBps) Copying: 900/1024 [MB] (22 MBps) Copying: 922/1024 [MB] (22 MBps) Copying: 945/1024 [MB] (22 MBps) Copying: 967/1024 [MB] (22 MBps) Copying: 990/1024 [MB] (22 MBps) Copying: 1012/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-13 06:16:56.190638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.712 [2024-07-13 06:16:56.190977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:04.712 [2024-07-13 06:16:56.191196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:04.712 [2024-07-13 06:16:56.191255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.712 [2024-07-13 06:16:56.191436] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:04.712 [2024-07-13 06:16:56.191957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.712 [2024-07-13 06:16:56.192125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:04.712 [2024-07-13 06:16:56.192280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:30:04.712 [2024-07-13 06:16:56.192402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.712 [2024-07-13 06:16:56.192673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.712 [2024-07-13 06:16:56.192736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:04.712 [2024-07-13 06:16:56.192853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:30:04.712 [2024-07-13 06:16:56.192991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.712 [2024-07-13 06:16:56.193075] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.712 [2024-07-13 06:16:56.193225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:04.712 [2024-07-13 06:16:56.193281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:04.712 [2024-07-13 06:16:56.193419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.712 [2024-07-13 06:16:56.193605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.712 [2024-07-13 06:16:56.193666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:04.712 [2024-07-13 06:16:56.193826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:30:04.712 [2024-07-13 06:16:56.193876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.712 [2024-07-13 06:16:56.193926] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:04.712 [2024-07-13 06:16:56.193971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133376 / 261120 wr_cnt: 1 state: open 00:30:04.712 [2024-07-13 06:16:56.194098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.194847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.195025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.195168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.195369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.195430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.195548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.195614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.195665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.195829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.195891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 
00:30:04.712 [2024-07-13 06:16:56.196501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 
wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.196993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.197003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.197013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.197024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.197034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.197043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.197053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.197062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.197072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.197082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.197137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.197160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:04.712 [2024-07-13 06:16:56.197173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197531] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:04.713 [2024-07-13 06:16:56.197628] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:04.713 [2024-07-13 06:16:56.197638] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2bf28cc8-5557-4d51-be79-31ccfa58f7c9 00:30:04.713 [2024-07-13 06:16:56.197648] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133376 00:30:04.713 [2024-07-13 06:16:56.197657] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 12320 00:30:04.713 [2024-07-13 06:16:56.197676] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 12288 00:30:04.713 [2024-07-13 06:16:56.197687] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0026 00:30:04.713 [2024-07-13 06:16:56.197695] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:04.713 [2024-07-13 06:16:56.197705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:04.713 [2024-07-13 06:16:56.197714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:04.713 [2024-07-13 06:16:56.197722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:04.713 [2024-07-13 06:16:56.197730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:04.713 [2024-07-13 06:16:56.197740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.713 [2024-07-13 06:16:56.197751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:04.713 [2024-07-13 06:16:56.197760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.816 ms 00:30:04.713 [2024-07-13 06:16:56.197769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.713 [2024-07-13 06:16:56.199030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.713 [2024-07-13 06:16:56.199052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:04.713 [2024-07-13 06:16:56.199064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:30:04.713 [2024-07-13 06:16:56.199079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.713 [2024-07-13 06:16:56.199150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.713 [2024-07-13 06:16:56.199178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:04.713 [2024-07-13 06:16:56.199206] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:30:04.713 [2024-07-13 06:16:56.199220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.713 [2024-07-13 06:16:56.203179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.713 [2024-07-13 06:16:56.203206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:04.713 [2024-07-13 06:16:56.203228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.713 [2024-07-13 06:16:56.203238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.713 [2024-07-13 06:16:56.203287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.713 [2024-07-13 06:16:56.203301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:04.713 [2024-07-13 06:16:56.203311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.713 [2024-07-13 06:16:56.203320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.713 [2024-07-13 06:16:56.203393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.713 [2024-07-13 06:16:56.203431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:04.713 [2024-07-13 06:16:56.203442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.713 [2024-07-13 06:16:56.203451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.713 [2024-07-13 06:16:56.203470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.713 [2024-07-13 06:16:56.203482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:04.713 [2024-07-13 06:16:56.203502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.713 [2024-07-13 06:16:56.203511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.713 [2024-07-13 06:16:56.211041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.713 [2024-07-13 06:16:56.211106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:04.713 [2024-07-13 06:16:56.211152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.713 [2024-07-13 06:16:56.211172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.713 [2024-07-13 06:16:56.218063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.713 [2024-07-13 06:16:56.218106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:04.713 [2024-07-13 06:16:56.218136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.713 [2024-07-13 06:16:56.218156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.713 [2024-07-13 06:16:56.218195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.713 [2024-07-13 06:16:56.218214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:04.713 [2024-07-13 06:16:56.218224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.713 [2024-07-13 06:16:56.218233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.713 [2024-07-13 06:16:56.218289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.713 [2024-07-13 06:16:56.218303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
00:30:04.713 [2024-07-13 06:16:56.218313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.713 [2024-07-13 06:16:56.218322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.713 [2024-07-13 06:16:56.218414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.713 [2024-07-13 06:16:56.218432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:04.713 [2024-07-13 06:16:56.218448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.713 [2024-07-13 06:16:56.218458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.713 [2024-07-13 06:16:56.218491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.713 [2024-07-13 06:16:56.218507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:04.713 [2024-07-13 06:16:56.218517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.713 [2024-07-13 06:16:56.218526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.713 [2024-07-13 06:16:56.218566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.713 [2024-07-13 06:16:56.218579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:04.713 [2024-07-13 06:16:56.218596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.713 [2024-07-13 06:16:56.218605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.713 [2024-07-13 06:16:56.218659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.713 [2024-07-13 06:16:56.218672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:04.713 [2024-07-13 06:16:56.218682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.713 [2024-07-13 06:16:56.218691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.713 [2024-07-13 06:16:56.218857] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 28.176 ms, result 0 00:30:04.713 00:30:04.713 00:30:04.713 06:16:56 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:06.618 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:06.618 06:16:58 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:06.618 06:16:58 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:30:06.618 06:16:58 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:06.618 06:16:58 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:06.619 06:16:58 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:06.619 06:16:58 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 96561 00:30:06.619 06:16:58 ftl.ftl_restore_fast -- common/autotest_common.sh@948 -- # '[' -z 96561 ']' 00:30:06.619 06:16:58 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # kill -0 96561 00:30:06.619 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (96561) - No such process 00:30:06.619 Process with pid 96561 is not found 00:30:06.619 06:16:58 ftl.ftl_restore_fast -- common/autotest_common.sh@975 -- # echo 'Process with pid 96561 is not found' 
00:30:06.619 06:16:58 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:30:06.619 Remove shared memory files 00:30:06.619 06:16:58 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:06.619 06:16:58 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:30:06.619 06:16:58 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_2bf28cc8-5557-4d51-be79-31ccfa58f7c9_band_md /dev/hugepages/ftl_2bf28cc8-5557-4d51-be79-31ccfa58f7c9_l2p_l1 /dev/hugepages/ftl_2bf28cc8-5557-4d51-be79-31ccfa58f7c9_l2p_l2 /dev/hugepages/ftl_2bf28cc8-5557-4d51-be79-31ccfa58f7c9_l2p_l2_ctx /dev/hugepages/ftl_2bf28cc8-5557-4d51-be79-31ccfa58f7c9_nvc_md /dev/hugepages/ftl_2bf28cc8-5557-4d51-be79-31ccfa58f7c9_p2l_pool /dev/hugepages/ftl_2bf28cc8-5557-4d51-be79-31ccfa58f7c9_sb /dev/hugepages/ftl_2bf28cc8-5557-4d51-be79-31ccfa58f7c9_sb_shm /dev/hugepages/ftl_2bf28cc8-5557-4d51-be79-31ccfa58f7c9_trim_bitmap /dev/hugepages/ftl_2bf28cc8-5557-4d51-be79-31ccfa58f7c9_trim_log /dev/hugepages/ftl_2bf28cc8-5557-4d51-be79-31ccfa58f7c9_trim_md /dev/hugepages/ftl_2bf28cc8-5557-4d51-be79-31ccfa58f7c9_vmap 00:30:06.619 06:16:58 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:30:06.619 06:16:58 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:06.619 06:16:58 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:30:06.619 00:30:06.619 real 3m23.348s 00:30:06.619 user 3m10.729s 00:30:06.619 sys 0m13.644s 00:30:06.619 06:16:58 ftl.ftl_restore_fast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:06.619 06:16:58 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:06.619 ************************************ 00:30:06.619 END TEST ftl_restore_fast 00:30:06.619 ************************************ 00:30:06.877 06:16:58 ftl -- common/autotest_common.sh@1142 -- # return 0 00:30:06.877 06:16:58 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:30:06.877 06:16:58 ftl -- ftl/ftl.sh@14 -- # killprocess 89229 00:30:06.877 06:16:58 ftl -- common/autotest_common.sh@948 -- # '[' -z 89229 ']' 00:30:06.877 06:16:58 ftl -- common/autotest_common.sh@952 -- # kill -0 89229 00:30:06.877 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (89229) - No such process 00:30:06.877 Process with pid 89229 is not found 00:30:06.877 06:16:58 ftl -- common/autotest_common.sh@975 -- # echo 'Process with pid 89229 is not found' 00:30:06.877 06:16:58 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:30:06.877 06:16:58 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=98590 00:30:06.877 06:16:58 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:06.877 06:16:58 ftl -- ftl/ftl.sh@20 -- # waitforlisten 98590 00:30:06.877 06:16:58 ftl -- common/autotest_common.sh@829 -- # '[' -z 98590 ']' 00:30:06.877 06:16:58 ftl -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.877 06:16:58 ftl -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:06.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.877 06:16:58 ftl -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.877 06:16:58 ftl -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:06.877 06:16:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:06.877 [2024-07-13 06:16:58.470564] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:30:06.877 [2024-07-13 06:16:58.470758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98590 ] 00:30:07.136 [2024-07-13 06:16:58.618871] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.136 [2024-07-13 06:16:58.662214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.704 06:16:59 ftl -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:07.704 06:16:59 ftl -- common/autotest_common.sh@862 -- # return 0 00:30:07.704 06:16:59 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:07.963 nvme0n1 00:30:07.963 06:16:59 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:30:07.963 06:16:59 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:07.963 06:16:59 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:08.222 06:16:59 ftl -- ftl/common.sh@28 -- # stores=dd93c6ae-d78a-409e-8412-e1ad8618052c 00:30:08.222 06:16:59 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:30:08.222 06:16:59 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dd93c6ae-d78a-409e-8412-e1ad8618052c 00:30:08.481 06:17:00 ftl -- ftl/ftl.sh@23 -- # killprocess 98590 00:30:08.481 06:17:00 ftl -- common/autotest_common.sh@948 -- # '[' -z 98590 ']' 00:30:08.481 06:17:00 ftl -- common/autotest_common.sh@952 -- # kill -0 98590 00:30:08.481 06:17:00 ftl -- common/autotest_common.sh@953 -- # uname 00:30:08.481 06:17:00 ftl -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:08.481 06:17:00 ftl -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98590 00:30:08.481 06:17:00 ftl -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:08.481 06:17:00 ftl -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:08.481 killing process with pid 98590 00:30:08.481 06:17:00 ftl -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98590' 00:30:08.481 06:17:00 ftl -- common/autotest_common.sh@967 -- # kill 98590 00:30:08.481 06:17:00 ftl -- common/autotest_common.sh@972 -- # wait 98590 00:30:08.740 06:17:00 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:08.999 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:08.999 Waiting for block devices as requested 00:30:09.257 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:09.257 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:09.257 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:09.257 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:14.525 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:14.525 06:17:06 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:30:14.525 Remove shared memory files 00:30:14.525 06:17:06 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:14.525 06:17:06 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:30:14.525 06:17:06 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:30:14.525 06:17:06 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:30:14.525 06:17:06 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:14.525 06:17:06 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:30:14.525 00:30:14.525 real 
14m11.294s 00:30:14.525 user 16m30.515s 00:30:14.525 sys 1m36.704s 00:30:14.525 06:17:06 ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:14.525 06:17:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:14.525 ************************************ 00:30:14.525 END TEST ftl 00:30:14.525 ************************************ 00:30:14.525 06:17:06 -- common/autotest_common.sh@1142 -- # return 0 00:30:14.525 06:17:06 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:14.525 06:17:06 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:30:14.525 06:17:06 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:14.525 06:17:06 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:14.525 06:17:06 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:30:14.525 06:17:06 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:14.525 06:17:06 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:14.525 06:17:06 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:30:14.525 06:17:06 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:30:14.525 06:17:06 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:30:14.525 06:17:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:14.525 06:17:06 -- common/autotest_common.sh@10 -- # set +x 00:30:14.525 06:17:06 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:30:14.525 06:17:06 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:30:14.525 06:17:06 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:30:14.525 06:17:06 -- common/autotest_common.sh@10 -- # set +x 00:30:15.952 INFO: APP EXITING 00:30:15.952 INFO: killing all VMs 00:30:15.952 INFO: killing vhost app 00:30:15.952 INFO: EXIT DONE 00:30:16.520 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:16.780 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:30:16.780 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:30:16.780 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:30:16.780 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:30:17.347 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:17.605 Cleaning 00:30:17.605 Removing: /var/run/dpdk/spdk0/config 00:30:17.605 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:17.605 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:17.605 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:17.605 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:17.605 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:17.605 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:17.605 Removing: /var/run/dpdk/spdk0 00:30:17.605 Removing: /var/run/dpdk/spdk_pid73844 00:30:17.605 Removing: /var/run/dpdk/spdk_pid73995 00:30:17.605 Removing: /var/run/dpdk/spdk_pid74188 00:30:17.606 Removing: /var/run/dpdk/spdk_pid74276 00:30:17.606 Removing: /var/run/dpdk/spdk_pid74304 00:30:17.606 Removing: /var/run/dpdk/spdk_pid74416 00:30:17.606 Removing: /var/run/dpdk/spdk_pid74428 00:30:17.606 Removing: /var/run/dpdk/spdk_pid74587 00:30:17.606 Removing: /var/run/dpdk/spdk_pid74652 00:30:17.606 Removing: /var/run/dpdk/spdk_pid74724 00:30:17.606 Removing: /var/run/dpdk/spdk_pid74803 00:30:17.606 Removing: /var/run/dpdk/spdk_pid74881 00:30:17.606 Removing: /var/run/dpdk/spdk_pid74915 00:30:17.606 Removing: /var/run/dpdk/spdk_pid74946 00:30:17.606 Removing: /var/run/dpdk/spdk_pid75014 00:30:17.606 Removing: /var/run/dpdk/spdk_pid75111 00:30:17.606 Removing: /var/run/dpdk/spdk_pid75543 
00:30:17.606 Removing: /var/run/dpdk/spdk_pid75592 00:30:17.606 Removing: /var/run/dpdk/spdk_pid75643 00:30:17.606 Removing: /var/run/dpdk/spdk_pid75659 00:30:17.606 Removing: /var/run/dpdk/spdk_pid75722 00:30:17.606 Removing: /var/run/dpdk/spdk_pid75738 00:30:17.606 Removing: /var/run/dpdk/spdk_pid75802 00:30:17.606 Removing: /var/run/dpdk/spdk_pid75818 00:30:17.606 Removing: /var/run/dpdk/spdk_pid75865 00:30:17.606 Removing: /var/run/dpdk/spdk_pid75883 00:30:17.863 Removing: /var/run/dpdk/spdk_pid75930 00:30:17.863 Removing: /var/run/dpdk/spdk_pid75943 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76068 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76099 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76180 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76228 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76254 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76315 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76351 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76386 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76416 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76457 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76487 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76523 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76558 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76588 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76624 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76659 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76695 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76725 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76760 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76796 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76830 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76867 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76900 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76944 00:30:17.863 Removing: /var/run/dpdk/spdk_pid76974 00:30:17.863 Removing: /var/run/dpdk/spdk_pid77005 00:30:17.863 Removing: /var/run/dpdk/spdk_pid77076 00:30:17.863 Removing: /var/run/dpdk/spdk_pid77170 00:30:17.863 Removing: /var/run/dpdk/spdk_pid77315 00:30:17.863 Removing: /var/run/dpdk/spdk_pid77388 00:30:17.863 Removing: /var/run/dpdk/spdk_pid77419 00:30:17.863 Removing: /var/run/dpdk/spdk_pid77853 00:30:17.863 Removing: /var/run/dpdk/spdk_pid77946 00:30:17.863 Removing: /var/run/dpdk/spdk_pid78044 00:30:17.863 Removing: /var/run/dpdk/spdk_pid78081 00:30:17.863 Removing: /var/run/dpdk/spdk_pid78106 00:30:17.863 Removing: /var/run/dpdk/spdk_pid78181 00:30:17.863 Removing: /var/run/dpdk/spdk_pid78791 00:30:17.863 Removing: /var/run/dpdk/spdk_pid78822 00:30:17.863 Removing: /var/run/dpdk/spdk_pid79299 00:30:17.863 Removing: /var/run/dpdk/spdk_pid79386 00:30:17.863 Removing: /var/run/dpdk/spdk_pid79484 00:30:17.863 Removing: /var/run/dpdk/spdk_pid79526 00:30:17.863 Removing: /var/run/dpdk/spdk_pid79546 00:30:17.863 Removing: /var/run/dpdk/spdk_pid79577 00:30:17.863 Removing: /var/run/dpdk/spdk_pid81385 00:30:17.863 Removing: /var/run/dpdk/spdk_pid81511 00:30:17.863 Removing: /var/run/dpdk/spdk_pid81515 00:30:17.863 Removing: /var/run/dpdk/spdk_pid81527 00:30:17.863 Removing: /var/run/dpdk/spdk_pid81574 00:30:17.863 Removing: /var/run/dpdk/spdk_pid81578 00:30:17.863 Removing: /var/run/dpdk/spdk_pid81590 00:30:17.863 Removing: /var/run/dpdk/spdk_pid81629 00:30:17.863 Removing: /var/run/dpdk/spdk_pid81633 00:30:17.863 Removing: /var/run/dpdk/spdk_pid81645 00:30:17.863 Removing: /var/run/dpdk/spdk_pid81691 00:30:17.863 Removing: /var/run/dpdk/spdk_pid81695 00:30:17.863 Removing: /var/run/dpdk/spdk_pid81707 00:30:17.863 Removing: 
/var/run/dpdk/spdk_pid83049 00:30:17.863 Removing: /var/run/dpdk/spdk_pid83133 00:30:17.863 Removing: /var/run/dpdk/spdk_pid84525 00:30:17.863 Removing: /var/run/dpdk/spdk_pid85850 00:30:17.863 Removing: /var/run/dpdk/spdk_pid85931 00:30:17.863 Removing: /var/run/dpdk/spdk_pid86013 00:30:17.863 Removing: /var/run/dpdk/spdk_pid86084 00:30:17.863 Removing: /var/run/dpdk/spdk_pid86183 00:30:17.863 Removing: /var/run/dpdk/spdk_pid86252 00:30:17.863 Removing: /var/run/dpdk/spdk_pid86376 00:30:17.863 Removing: /var/run/dpdk/spdk_pid86729 00:30:17.863 Removing: /var/run/dpdk/spdk_pid86749 00:30:17.863 Removing: /var/run/dpdk/spdk_pid87199 00:30:17.863 Removing: /var/run/dpdk/spdk_pid87369 00:30:17.863 Removing: /var/run/dpdk/spdk_pid87463 00:30:17.863 Removing: /var/run/dpdk/spdk_pid87558 00:30:17.863 Removing: /var/run/dpdk/spdk_pid87594 00:30:17.863 Removing: /var/run/dpdk/spdk_pid87620 00:30:17.863 Removing: /var/run/dpdk/spdk_pid87899 00:30:17.863 Removing: /var/run/dpdk/spdk_pid87933 00:30:17.863 Removing: /var/run/dpdk/spdk_pid87983 00:30:17.863 Removing: /var/run/dpdk/spdk_pid88322 00:30:17.863 Removing: /var/run/dpdk/spdk_pid88459 00:30:18.120 Removing: /var/run/dpdk/spdk_pid89229 00:30:18.120 Removing: /var/run/dpdk/spdk_pid89342 00:30:18.120 Removing: /var/run/dpdk/spdk_pid89500 00:30:18.120 Removing: /var/run/dpdk/spdk_pid89587 00:30:18.120 Removing: /var/run/dpdk/spdk_pid89932 00:30:18.120 Removing: /var/run/dpdk/spdk_pid90185 00:30:18.120 Removing: /var/run/dpdk/spdk_pid90515 00:30:18.120 Removing: /var/run/dpdk/spdk_pid90688 00:30:18.120 Removing: /var/run/dpdk/spdk_pid90812 00:30:18.120 Removing: /var/run/dpdk/spdk_pid90848 00:30:18.120 Removing: /var/run/dpdk/spdk_pid90976 00:30:18.120 Removing: /var/run/dpdk/spdk_pid90990 00:30:18.120 Removing: /var/run/dpdk/spdk_pid91026 00:30:18.120 Removing: /var/run/dpdk/spdk_pid91216 00:30:18.120 Removing: /var/run/dpdk/spdk_pid91419 00:30:18.120 Removing: /var/run/dpdk/spdk_pid91854 00:30:18.120 Removing: /var/run/dpdk/spdk_pid92330 00:30:18.120 Removing: /var/run/dpdk/spdk_pid92792 00:30:18.120 Removing: /var/run/dpdk/spdk_pid93299 00:30:18.120 Removing: /var/run/dpdk/spdk_pid93431 00:30:18.120 Removing: /var/run/dpdk/spdk_pid93520 00:30:18.120 Removing: /var/run/dpdk/spdk_pid94190 00:30:18.120 Removing: /var/run/dpdk/spdk_pid94255 00:30:18.120 Removing: /var/run/dpdk/spdk_pid94702 00:30:18.120 Removing: /var/run/dpdk/spdk_pid95133 00:30:18.120 Removing: /var/run/dpdk/spdk_pid95669 00:30:18.120 Removing: /var/run/dpdk/spdk_pid95776 00:30:18.120 Removing: /var/run/dpdk/spdk_pid95812 00:30:18.120 Removing: /var/run/dpdk/spdk_pid95865 00:30:18.120 Removing: /var/run/dpdk/spdk_pid95919 00:30:18.120 Removing: /var/run/dpdk/spdk_pid95975 00:30:18.120 Removing: /var/run/dpdk/spdk_pid96149 00:30:18.120 Removing: /var/run/dpdk/spdk_pid96205 00:30:18.120 Removing: /var/run/dpdk/spdk_pid96262 00:30:18.120 Removing: /var/run/dpdk/spdk_pid96337 00:30:18.120 Removing: /var/run/dpdk/spdk_pid96354 00:30:18.120 Removing: /var/run/dpdk/spdk_pid96423 00:30:18.120 Removing: /var/run/dpdk/spdk_pid96561 00:30:18.120 Removing: /var/run/dpdk/spdk_pid96757 00:30:18.120 Removing: /var/run/dpdk/spdk_pid97178 00:30:18.120 Removing: /var/run/dpdk/spdk_pid97645 00:30:18.120 Removing: /var/run/dpdk/spdk_pid98100 00:30:18.120 Removing: /var/run/dpdk/spdk_pid98590 00:30:18.120 Clean 00:30:18.120 06:17:09 -- common/autotest_common.sh@1451 -- # return 0 00:30:18.120 06:17:09 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:30:18.120 06:17:09 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:30:18.120 06:17:09 -- common/autotest_common.sh@10 -- # set +x 00:30:18.120 06:17:09 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:30:18.120 06:17:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:18.120 06:17:09 -- common/autotest_common.sh@10 -- # set +x 00:30:18.378 06:17:09 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:18.378 06:17:09 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:18.378 06:17:09 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:18.378 06:17:09 -- spdk/autotest.sh@391 -- # hash lcov 00:30:18.378 06:17:09 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:18.378 06:17:09 -- spdk/autotest.sh@393 -- # hostname 00:30:18.378 06:17:09 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:18.635 geninfo: WARNING: invalid characters removed from testname! 00:30:40.630 06:17:31 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:43.160 06:17:34 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:45.692 06:17:37 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:48.224 06:17:39 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:50.755 06:17:41 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:52.657 06:17:44 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
00:30:55.191 06:17:46 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:55.191 06:17:46 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:30:55.191 06:17:46 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:30:55.191 06:17:46 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:55.191 06:17:46 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:55.191 06:17:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:55.191 06:17:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:55.191 06:17:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:55.191 06:17:46 -- paths/export.sh@5 -- $ export PATH
00:30:55.191 06:17:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:55.191 06:17:46 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:30:55.191 06:17:46 -- common/autobuild_common.sh@444 -- $ date +%s
00:30:55.191 06:17:46 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720851466.XXXXXX
00:30:55.191 06:17:46 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720851466.M9nyZf
00:30:55.191 06:17:46 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:30:55.191 06:17:46 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']'
00:30:55.191 06:17:46 -- common/autobuild_common.sh@451 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:30:55.191 06:17:46 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:30:55.191 06:17:46 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:30:55.191 06:17:46 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:30:55.191 06:17:46 -- common/autobuild_common.sh@460 -- $ get_config_params
00:30:55.191 06:17:46 -- common/autotest_common.sh@396 -- $ xtrace_disable
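
The paths/export.sh trace explains the duplicated entries in the PATH echo: the script prepends each tool root one line at a time without checking whether it is already present, and the PATH inherited from earlier in the job already began with the same roots. A sketch of that prepend pattern, with the directory list taken from the log:

    # Prepend each tool root unconditionally; later prepends win, so the
    # final PATH starts protoc, go, golangci, and the inherited copies of
    # the same roots remain further down, hence the duplicates echoed above.
    for dir in /opt/golangci/1.54.2/bin /opt/go/1.21.1/bin /opt/protoc/21.7/bin; do
        PATH=$dir:$PATH
    done
    export PATH

Leaving the duplicates in place is harmless for command lookup (the first hit wins) and keeps the script a set of independent one-line prepends.
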
00:30:55.191 06:17:46 -- common/autotest_common.sh@10 -- $ set +x
00:30:55.191 06:17:46 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:30:55.191 06:17:46 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:30:55.191 06:17:46 -- pm/common@17 -- $ local monitor
00:30:55.191 06:17:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:55.191 06:17:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:55.191 06:17:46 -- pm/common@25 -- $ sleep 1
00:30:55.191 06:17:46 -- pm/common@21 -- $ date +%s
00:30:55.191 06:17:46 -- pm/common@21 -- $ date +%s
00:30:55.191 06:17:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720851466
00:30:55.191 06:17:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720851466
00:30:55.191 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720851466_collect-vmstat.pm.log
00:30:55.191 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720851466_collect-cpu-load.pm.log
00:30:56.127 06:17:47 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:30:56.127 06:17:47 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:30:56.127 06:17:47 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:30:56.127 06:17:47 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:30:56.127 06:17:47 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:30:56.127 06:17:47 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:30:56.127 06:17:47 -- spdk/autopackage.sh@19 -- $ timing_finish
00:30:56.127 06:17:47 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:56.127 06:17:47 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:30:56.127 06:17:47 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:30:56.127 06:17:47 -- spdk/autopackage.sh@20 -- $ exit 0
00:30:56.127 06:17:47 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:30:56.127 06:17:47 -- pm/common@29 -- $ signal_monitor_resources TERM
00:30:56.127 06:17:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:30:56.127 06:17:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:56.127 06:17:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:30:56.127 06:17:47 -- pm/common@44 -- $ pid=100278
00:30:56.127 06:17:47 -- pm/common@50 -- $ kill -TERM 100278
00:30:56.127 06:17:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:56.127 06:17:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:30:56.127 06:17:47 -- pm/common@44 -- $ pid=100280
00:30:56.127 06:17:47 -- pm/common@50 -- $ kill -TERM 100280
00:30:56.127 + [[ -n 5938 ]]
00:30:56.128 + sudo kill 5938
00:30:57.073 [Pipeline] }
00:30:57.096 [Pipeline] // timeout
00:30:57.102 [Pipeline] }
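
The stop_monitor_resources trace above shows the pidfile protocol used for the resource monitors: each collector (collect-cpu-load, collect-vmstat) records its PID under the output power directory when it starts, and cleanup TERMs whatever is still recorded there (pids 100278 and 100280 in this run). A minimal sketch of the shutdown half, assuming one <name>.pid file per monitor; the real pm/common helper handles more cases than this:

    # Stop each monitor via its pidfile; tolerate monitors that already exited.
    power_dir=/home/vagrant/spdk_repo/spdk/../output/power
    for mon in collect-cpu-load collect-vmstat; do
        pidfile=$power_dir/$mon.pid
        [[ -e $pidfile ]] || continue     # monitor never started, or cleaned up
        pid=$(<"$pidfile")                # read the recorded PID
        kill -TERM "$pid" 2>/dev/null || true
        rm -f "$pidfile"
    done

The trap registered earlier in the trace (trap stop_monitor_resources EXIT) is what guarantees this runs even when autopackage exits early.
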
00:30:57.124 [Pipeline] // stage
00:30:57.129 [Pipeline] }
00:30:57.150 [Pipeline] // catchError
00:30:57.160 [Pipeline] stage
00:30:57.163 [Pipeline] { (Stop VM)
00:30:57.179 [Pipeline] sh
00:30:57.457 + vagrant halt
00:31:00.735 ==> default: Halting domain...
00:31:07.308 [Pipeline] sh
00:31:07.589 + vagrant destroy -f
00:31:10.877 ==> default: Removing domain...
00:31:10.889 [Pipeline] sh
00:31:11.170 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:31:11.180 [Pipeline] }
00:31:11.199 [Pipeline] // stage
00:31:11.204 [Pipeline] }
00:31:11.221 [Pipeline] // dir
00:31:11.227 [Pipeline] }
00:31:11.245 [Pipeline] // wrap
00:31:11.251 [Pipeline] }
00:31:11.274 [Pipeline] // catchError
00:31:11.321 [Pipeline] stage
00:31:11.323 [Pipeline] { (Epilogue)
00:31:11.332 [Pipeline] sh
00:31:11.607 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:16.961 [Pipeline] catchError
00:31:16.963 [Pipeline] {
00:31:16.983 [Pipeline] sh
00:31:17.270 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:17.528 Artifacts sizes are good
00:31:17.538 [Pipeline] }
00:31:17.558 [Pipeline] // catchError
00:31:17.570 [Pipeline] archiveArtifacts
00:31:17.578 Archiving artifacts
00:31:17.720 [Pipeline] cleanWs
00:31:17.731 [WS-CLEANUP] Deleting project workspace...
00:31:17.731 [WS-CLEANUP] Deferred wipeout is used...
00:31:17.737 [WS-CLEANUP] done
00:31:17.739 [Pipeline] }
00:31:17.757 [Pipeline] // stage
00:31:17.763 [Pipeline] }
00:31:17.781 [Pipeline] // node
00:31:17.787 [Pipeline] End of Pipeline
00:31:17.820 Finished: SUCCESS
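
The teardown traced in this final stage reduces to three shell steps, sketched here with the workspace path from the log:

    # End-of-job teardown: stop the guest, delete it, then hand the
    # collected output to the Jenkins workspace for archiving.
    vagrant halt                       # graceful shutdown of the test VM
    vagrant destroy -f                 # remove the domain without prompting
    mv output /var/jenkins/workspace/nvme-vg-autotest/output

Destroying the VM only after a clean halt gives the guest a chance to flush its logs before the output directory is moved and archived.
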